00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2379 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3644 00:00:00.003 originally caused by: 00:00:00.003 Started by timer 00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.109 The recommended git tool is: git 00:00:00.109 using credential 00000000-0000-0000-0000-000000000002 00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.155 Fetching changes from the remote Git repository 00:00:00.157 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.201 Using shallow fetch with depth 1 00:00:00.201 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.201 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.269 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.269 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.670 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.682 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.696 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.696 > git config core.sparsecheckout # timeout=10 00:00:05.706 > git read-tree -mu HEAD # timeout=10 00:00:05.722 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.744 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.744 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.825 [Pipeline] Start of Pipeline 00:00:05.838 [Pipeline] library 00:00:05.840 Loading library shm_lib@master 00:00:05.840 Library shm_lib@master is cached. Copying from home. 00:00:05.857 [Pipeline] node 00:00:05.877 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.879 [Pipeline] { 00:00:05.891 [Pipeline] catchError 00:00:05.893 [Pipeline] { 00:00:05.907 [Pipeline] wrap 00:00:05.916 [Pipeline] { 00:00:05.925 [Pipeline] stage 00:00:05.927 [Pipeline] { (Prologue) 00:00:06.149 [Pipeline] sh 00:00:06.891 + logger -p user.info -t JENKINS-CI 00:00:06.909 [Pipeline] echo 00:00:06.911 Node: GP11 00:00:06.918 [Pipeline] sh 00:00:07.250 [Pipeline] setCustomBuildProperty 00:00:07.264 [Pipeline] echo 00:00:07.266 Cleanup processes 00:00:07.272 [Pipeline] sh 00:00:07.557 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.557 4759 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.571 [Pipeline] sh 00:00:07.858 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.858 ++ grep -v 'sudo pgrep' 00:00:07.858 ++ awk '{print $1}' 00:00:07.858 + sudo kill -9 00:00:07.858 + true 00:00:07.873 [Pipeline] cleanWs 00:00:07.884 [WS-CLEANUP] Deleting project workspace... 00:00:07.884 [WS-CLEANUP] Deferred wipeout is used... 
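The "Cleanup processes" step traced above chains pgrep, grep and awk to find any SPDK processes left over from a previous run on this node and kill them, while tolerating the case where nothing matches. A minimal standalone sketch of that idiom (workspace path taken from the log; the variable names are illustrative, not the actual pipeline script) is:

  #!/usr/bin/env bash
  # Stale-process cleanup, as traced in the prologue above.
  workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # List matching processes, drop the pgrep command itself, keep only the PIDs.
  pids=$(sudo pgrep -af "$workspace/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill exits non-zero when the PID list is empty; '|| true' keeps the step green,
  # mirroring the '+ true' seen in the log.
  sudo kill -9 $pids || true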
00:00:07.895 [WS-CLEANUP] done 00:00:07.898 [Pipeline] setCustomBuildProperty 00:00:07.910 [Pipeline] sh 00:00:08.190 + sudo git config --global --replace-all safe.directory '*' 00:00:08.288 [Pipeline] httpRequest 00:00:10.404 [Pipeline] echo 00:00:10.406 Sorcerer 10.211.164.20 is alive 00:00:10.416 [Pipeline] retry 00:00:10.418 [Pipeline] { 00:00:10.431 [Pipeline] httpRequest 00:00:10.437 HttpMethod: GET 00:00:10.437 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.439 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.462 Response Code: HTTP/1.1 200 OK 00:00:10.462 Success: Status code 200 is in the accepted range: 200,404 00:00:10.463 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.335 [Pipeline] } 00:00:14.351 [Pipeline] // retry 00:00:14.358 [Pipeline] sh 00:00:14.655 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.683 [Pipeline] httpRequest 00:00:15.055 [Pipeline] echo 00:00:15.057 Sorcerer 10.211.164.20 is alive 00:00:15.067 [Pipeline] retry 00:00:15.069 [Pipeline] { 00:00:15.085 [Pipeline] httpRequest 00:00:15.091 HttpMethod: GET 00:00:15.091 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:15.092 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:15.107 Response Code: HTTP/1.1 200 OK 00:00:15.108 Success: Status code 200 is in the accepted range: 200,404 00:00:15.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:02.880 [Pipeline] } 00:01:02.898 [Pipeline] // retry 00:01:02.906 [Pipeline] sh 00:01:03.203 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:05.760 [Pipeline] sh 00:01:06.057 + git -C spdk log --oneline -n5 00:01:06.057 d47eb51c9 bdev: fix a race between reset start and complete 00:01:06.057 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:06.057 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:06.057 4bcab9fb9 correct kick for CQ full case 00:01:06.057 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:06.080 [Pipeline] withCredentials 00:01:06.093 > git --version # timeout=10 00:01:06.107 > git --version # 'git version 2.39.2' 00:01:06.139 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:06.141 [Pipeline] { 00:01:06.150 [Pipeline] retry 00:01:06.153 [Pipeline] { 00:01:06.168 [Pipeline] sh 00:01:06.717 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:06.997 [Pipeline] } 00:01:07.017 [Pipeline] // retry 00:01:07.022 [Pipeline] } 00:01:07.039 [Pipeline] // withCredentials 00:01:07.049 [Pipeline] httpRequest 00:01:07.439 [Pipeline] echo 00:01:07.441 Sorcerer 10.211.164.20 is alive 00:01:07.451 [Pipeline] retry 00:01:07.453 [Pipeline] { 00:01:07.468 [Pipeline] httpRequest 00:01:07.473 HttpMethod: GET 00:01:07.474 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.475 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.479 Response Code: HTTP/1.1 200 OK 00:01:07.480 Success: Status code 200 is in the accepted range: 200,404 00:01:07.480 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.556 [Pipeline] } 00:01:18.577 [Pipeline] // retry 00:01:18.584 [Pipeline] sh 00:01:18.885 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:20.824 [Pipeline] sh 00:01:21.119 + git -C dpdk log --oneline -n5 00:01:21.119 caf0f5d395 version: 22.11.4 00:01:21.119 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:21.119 dc9c799c7d vhost: fix missing spinlock unlock 00:01:21.119 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:21.119 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:21.132 [Pipeline] } 00:01:21.147 [Pipeline] // stage 00:01:21.156 [Pipeline] stage 00:01:21.159 [Pipeline] { (Prepare) 00:01:21.179 [Pipeline] writeFile 00:01:21.195 [Pipeline] sh 00:01:21.490 + logger -p user.info -t JENKINS-CI 00:01:21.506 [Pipeline] sh 00:01:21.796 + logger -p user.info -t JENKINS-CI 00:01:21.807 [Pipeline] sh 00:01:22.092 + cat autorun-spdk.conf 00:01:22.092 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.092 SPDK_TEST_NVMF=1 00:01:22.092 SPDK_TEST_NVME_CLI=1 00:01:22.092 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.092 SPDK_TEST_NVMF_NICS=e810 00:01:22.092 SPDK_TEST_VFIOUSER=1 00:01:22.092 SPDK_RUN_UBSAN=1 00:01:22.092 NET_TYPE=phy 00:01:22.092 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:22.092 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.101 RUN_NIGHTLY=1 00:01:22.106 [Pipeline] readFile 00:01:22.144 [Pipeline] withEnv 00:01:22.146 [Pipeline] { 00:01:22.158 [Pipeline] sh 00:01:22.447 + set -ex 00:01:22.447 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:22.447 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.447 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.447 ++ SPDK_TEST_NVMF=1 00:01:22.447 ++ SPDK_TEST_NVME_CLI=1 00:01:22.447 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.447 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.447 ++ SPDK_TEST_VFIOUSER=1 00:01:22.447 ++ SPDK_RUN_UBSAN=1 00:01:22.447 ++ NET_TYPE=phy 00:01:22.447 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:22.447 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.447 ++ RUN_NIGHTLY=1 00:01:22.447 + case $SPDK_TEST_NVMF_NICS in 00:01:22.447 + DRIVERS=ice 00:01:22.447 + [[ tcp == \r\d\m\a ]] 00:01:22.447 + [[ -n ice ]] 00:01:22.447 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:22.447 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:26.664 rmmod: ERROR: Module irdma is not currently loaded 00:01:26.664 rmmod: ERROR: Module i40iw is not currently loaded 00:01:26.664 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:26.664 + true 00:01:26.664 + for D in $DRIVERS 00:01:26.664 + sudo modprobe ice 00:01:26.664 + exit 0 00:01:26.676 [Pipeline] } 00:01:26.692 [Pipeline] // withEnv 00:01:26.698 [Pipeline] } 00:01:26.712 [Pipeline] // stage 00:01:26.722 [Pipeline] catchError 00:01:26.724 [Pipeline] { 00:01:26.737 [Pipeline] timeout 00:01:26.737 Timeout set to expire in 1 hr 0 min 00:01:26.739 [Pipeline] { 00:01:26.753 [Pipeline] stage 00:01:26.755 [Pipeline] { (Tests) 00:01:26.770 [Pipeline] sh 00:01:27.063 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.063 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.063 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.063 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:27.063 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.063 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.063 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:27.063 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.063 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.063 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.063 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:27.063 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.063 + source /etc/os-release 00:01:27.063 ++ NAME='Fedora Linux' 00:01:27.063 ++ VERSION='39 (Cloud Edition)' 00:01:27.063 ++ ID=fedora 00:01:27.063 ++ VERSION_ID=39 00:01:27.063 ++ VERSION_CODENAME= 00:01:27.063 ++ PLATFORM_ID=platform:f39 00:01:27.063 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:27.063 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:27.063 ++ LOGO=fedora-logo-icon 00:01:27.063 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:27.063 ++ HOME_URL=https://fedoraproject.org/ 00:01:27.063 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:27.063 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:27.063 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:27.063 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:27.063 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:27.063 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:27.063 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:27.063 ++ SUPPORT_END=2024-11-12 00:01:27.063 ++ VARIANT='Cloud Edition' 00:01:27.063 ++ VARIANT_ID=cloud 00:01:27.063 + uname -a 00:01:27.063 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:27.063 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:28.006 Hugepages 00:01:28.006 node hugesize free / total 00:01:28.006 node0 1048576kB 0 / 0 00:01:28.006 node0 2048kB 0 / 0 00:01:28.006 node1 1048576kB 0 / 0 00:01:28.006 node1 2048kB 0 / 0 00:01:28.006 00:01:28.006 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.006 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:28.006 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:28.006 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:28.006 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:28.006 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:28.006 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:28.006 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:28.006 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:28.006 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:28.006 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:28.006 + rm -f /tmp/spdk-ld-path 00:01:28.267 + source autorun-spdk.conf 00:01:28.267 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.267 ++ SPDK_TEST_NVMF=1 00:01:28.267 ++ SPDK_TEST_NVME_CLI=1 00:01:28.267 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.267 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.267 ++ SPDK_TEST_VFIOUSER=1 00:01:28.267 ++ SPDK_RUN_UBSAN=1 00:01:28.267 ++ NET_TYPE=phy 00:01:28.267 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.267 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.267 ++ RUN_NIGHTLY=1 00:01:28.267 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.267 + [[ -n '' ]] 00:01:28.267 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.267 + for M in /var/spdk/build-*-manifest.txt 00:01:28.267 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:28.267 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.267 + for M in /var/spdk/build-*-manifest.txt 00:01:28.267 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.267 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.267 + for M in /var/spdk/build-*-manifest.txt 00:01:28.267 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.267 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.267 ++ uname 00:01:28.267 + [[ Linux == \L\i\n\u\x ]] 00:01:28.267 + sudo dmesg -T 00:01:28.267 + sudo dmesg --clear 00:01:28.267 + dmesg_pid=6077 00:01:28.267 + [[ Fedora Linux == FreeBSD ]] 00:01:28.267 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.267 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.267 + sudo dmesg -Tw 00:01:28.267 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.267 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.267 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.267 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.267 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.267 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:28.267 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.267 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.267 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.267 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.267 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.267 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.267 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.267 02:42:38 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:28.267 02:42:38 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.267 02:42:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:28.267 02:42:38 
-- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:28.267 02:42:38 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.267 02:42:38 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:28.267 02:42:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.267 02:42:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:28.267 02:42:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.267 02:42:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.267 02:42:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.268 02:42:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.268 02:42:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.268 02:42:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.268 02:42:38 -- paths/export.sh@5 -- $ export PATH 00:01:28.268 02:42:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.268 02:42:38 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.268 02:42:38 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:28.268 02:42:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731980558.XXXXXX 00:01:28.268 02:42:38 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731980558.RcK784 00:01:28.268 02:42:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:28.268 02:42:38 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:01:28.268 02:42:38 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.268 02:42:38 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:28.268 02:42:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.268 02:42:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.268 02:42:38 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:28.268 02:42:38 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:28.268 02:42:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.268 02:42:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:28.268 02:42:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:28.268 02:42:38 -- pm/common@17 -- $ local monitor 00:01:28.268 02:42:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.268 02:42:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.268 02:42:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.268 02:42:38 -- pm/common@21 -- $ date +%s 00:01:28.268 02:42:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.268 02:42:38 -- pm/common@21 -- $ date +%s 00:01:28.268 02:42:38 -- pm/common@25 -- $ sleep 1 00:01:28.268 02:42:38 -- pm/common@21 -- $ date +%s 00:01:28.268 02:42:38 -- pm/common@21 -- $ date +%s 00:01:28.268 02:42:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731980558 00:01:28.268 02:42:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731980558 00:01:28.268 02:42:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731980558 00:01:28.268 02:42:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731980558 00:01:28.268 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731980558_collect-cpu-load.pm.log 00:01:28.268 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731980558_collect-cpu-temp.pm.log 00:01:28.268 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731980558_collect-vmstat.pm.log 00:01:28.268 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731980558_collect-bmc-pm.bmc.pm.log 00:01:29.244 02:42:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:29.244 02:42:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.244 02:42:39 
-- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.244 02:42:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.244 02:42:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.244 Tue Nov 19 01:42:39 AM UTC 2024 00:01:29.244 02:42:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.506 v25.01-pre-190-gd47eb51c9 00:01:29.506 02:42:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.506 02:42:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.506 02:42:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.506 02:42:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:29.506 02:42:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.506 02:42:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.506 ************************************ 00:01:29.506 START TEST ubsan 00:01:29.506 ************************************ 00:01:29.506 02:42:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:29.506 using ubsan 00:01:29.506 00:01:29.506 real 0m0.000s 00:01:29.506 user 0m0.000s 00:01:29.506 sys 0m0.000s 00:01:29.506 02:42:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:29.506 02:42:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.506 ************************************ 00:01:29.506 END TEST ubsan 00:01:29.506 ************************************ 00:01:29.506 02:42:39 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:29.506 02:42:39 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:29.506 02:42:39 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:29.506 02:42:39 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:29.506 02:42:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.506 02:42:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.506 ************************************ 00:01:29.506 START TEST build_native_dpdk 00:01:29.506 ************************************ 00:01:29.506 02:42:39 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@70 -- $ 
external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:29.506 caf0f5d395 version: 22.11.4 00:01:29.506 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:29.506 dc9c799c7d vhost: fix missing spinlock unlock 00:01:29.506 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:29.506 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:29.506 02:42:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.507 02:42:39 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@337 -- $ 
read -ra ver2 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:29.507 patching file config/rte_config.h 00:01:29.507 Hunk #1 succeeded at 60 (offset 1 line). 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:29.507 patching file lib/pcapng/rte_pcapng.c 00:01:29.507 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:29.507 02:42:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:29.507 02:42:40 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:36.099 The Meson build system 00:01:36.100 Version: 1.5.0 00:01:36.100 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:36.100 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:36.100 Build type: native build 00:01:36.100 Program cat found: YES (/usr/bin/cat) 00:01:36.100 Project name: DPDK 00:01:36.100 Project version: 22.11.4 00:01:36.100 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.100 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:36.100 Host machine cpu family: x86_64 00:01:36.100 Host machine cpu: x86_64 00:01:36.100 Message: ## Building in Developer Mode ## 00:01:36.100 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.100 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:36.100 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.100 Program objdump found: YES (/usr/bin/objdump) 00:01:36.100 Program python3 found: YES (/usr/bin/python3) 00:01:36.100 Program cat found: YES (/usr/bin/cat) 00:01:36.100 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:36.100 Checking for size of "void *" : 8 00:01:36.100 Checking for size of "void *" : 8 (cached) 00:01:36.100 Library m found: YES 00:01:36.100 Library numa found: YES 00:01:36.100 Has header "numaif.h" : YES 00:01:36.100 Library fdt found: NO 00:01:36.100 Library execinfo found: NO 00:01:36.100 Has header "execinfo.h" : YES 00:01:36.100 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.100 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.100 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.100 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.100 Run-time dependency openssl found: YES 3.1.1 00:01:36.100 Run-time dependency libpcap found: YES 1.10.4 00:01:36.100 Has header "pcap.h" with dependency libpcap: YES 00:01:36.100 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.100 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.100 Compiler for C supports arguments -Wformat: YES 00:01:36.100 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.100 Compiler for C supports arguments -Wformat-security: NO 00:01:36.100 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.100 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.100 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.100 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.100 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.100 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.100 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.100 Compiler for C supports arguments -Wundef: YES 00:01:36.100 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.100 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.100 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:36.100 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.100 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.100 Compiler for C supports arguments -mavx512f: YES 00:01:36.100 Checking if "AVX512 checking" compiles: YES 00:01:36.100 Fetching value of define "__SSE4_2__" : 1 00:01:36.100 Fetching value of define "__AES__" : 1 00:01:36.100 Fetching value of define "__AVX__" : 1 00:01:36.100 Fetching value of define "__AVX2__" : (undefined) 00:01:36.100 Fetching value of define "__AVX512BW__" : (undefined) 00:01:36.100 Fetching value of define "__AVX512CD__" : (undefined) 00:01:36.100 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:36.100 Fetching value of define "__AVX512F__" : (undefined) 00:01:36.100 Fetching value of define "__AVX512VL__" : (undefined) 00:01:36.100 Fetching value of define "__PCLMUL__" : 1 00:01:36.100 Fetching value of define "__RDRND__" : 1 00:01:36.100 Fetching value of define "__RDSEED__" : (undefined) 00:01:36.100 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.100 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.100 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.100 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.100 Checking for function "getentropy" : YES 00:01:36.100 Message: lib/eal: Defining dependency "eal" 00:01:36.100 Message: lib/ring: Defining dependency "ring" 00:01:36.100 Message: lib/rcu: Defining dependency "rcu" 00:01:36.100 Message: lib/mempool: Defining dependency "mempool" 00:01:36.100 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.100 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.100 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.100 Compiler for C supports arguments -mpclmul: YES 00:01:36.100 Compiler for C supports arguments -maes: YES 00:01:36.100 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.100 Compiler for C supports arguments -mavx512bw: YES 00:01:36.100 Compiler for C supports arguments -mavx512dq: YES 00:01:36.100 Compiler for C supports arguments -mavx512vl: YES 00:01:36.100 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.100 Compiler for C supports arguments -mavx2: YES 00:01:36.100 Compiler for C supports arguments -mavx: YES 00:01:36.100 Message: lib/net: Defining dependency "net" 00:01:36.100 Message: lib/meter: Defining dependency "meter" 00:01:36.100 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.100 Message: lib/pci: Defining dependency "pci" 00:01:36.100 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.100 Message: lib/metrics: Defining dependency "metrics" 00:01:36.100 Message: lib/hash: Defining dependency "hash" 00:01:36.100 Message: lib/timer: Defining dependency "timer" 00:01:36.100 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:36.100 Compiler for C supports arguments -mavx2: YES (cached) 00:01:36.100 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.100 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:36.100 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:36.100 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:36.100 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:36.100 Message: lib/acl: Defining dependency "acl" 00:01:36.100 Message: lib/bbdev: Defining dependency "bbdev" 00:01:36.100 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:36.100 Run-time dependency libelf found: YES 0.191 00:01:36.100 Message: lib/bpf: Defining dependency "bpf" 00:01:36.100 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:36.100 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.100 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.100 Message: lib/distributor: Defining dependency "distributor" 00:01:36.100 Message: lib/efd: Defining dependency "efd" 00:01:36.100 Message: lib/eventdev: Defining dependency "eventdev" 00:01:36.100 Message: lib/gpudev: Defining dependency "gpudev" 00:01:36.100 Message: lib/gro: Defining dependency "gro" 00:01:36.100 Message: lib/gso: Defining dependency "gso" 00:01:36.100 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:36.100 Message: lib/jobstats: Defining dependency "jobstats" 00:01:36.100 Message: lib/latencystats: Defining dependency "latencystats" 00:01:36.100 Message: lib/lpm: Defining dependency "lpm" 00:01:36.100 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.100 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.100 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:36.100 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:36.100 Message: lib/member: Defining dependency "member" 00:01:36.100 Message: lib/pcapng: Defining dependency "pcapng" 00:01:36.100 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.100 Message: lib/power: Defining dependency "power" 00:01:36.100 Message: lib/rawdev: Defining dependency "rawdev" 00:01:36.100 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:36.100 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.100 Message: lib/rib: Defining dependency "rib" 00:01:36.100 Message: lib/reorder: Defining dependency "reorder" 00:01:36.100 Message: lib/sched: Defining dependency "sched" 00:01:36.100 Message: lib/security: Defining dependency "security" 00:01:36.100 Message: lib/stack: Defining dependency "stack" 00:01:36.100 Has header "linux/userfaultfd.h" : YES 00:01:36.100 Message: lib/vhost: Defining dependency "vhost" 00:01:36.100 Message: lib/ipsec: Defining dependency "ipsec" 00:01:36.100 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.100 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.100 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:36.100 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:36.100 Message: lib/fib: Defining dependency "fib" 00:01:36.100 Message: lib/port: Defining dependency "port" 00:01:36.100 Message: lib/pdump: Defining dependency "pdump" 00:01:36.100 Message: lib/table: Defining dependency "table" 00:01:36.100 Message: lib/pipeline: Defining dependency "pipeline" 00:01:36.100 Message: lib/graph: Defining dependency "graph" 00:01:36.100 Message: lib/node: Defining dependency "node" 00:01:36.100 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.100 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.100 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.100 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.100 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:36.100 Compiler for C supports arguments -Wno-unused-value: YES 00:01:37.044 Compiler for C supports arguments -Wno-format: YES 00:01:37.044 Compiler for C supports arguments -Wno-format-security: YES 00:01:37.044 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:37.044 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:37.044 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:37.044 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:37.044 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:37.044 Compiler for C supports arguments -mavx2: YES (cached) 00:01:37.044 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.044 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.044 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:37.044 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:37.044 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:37.044 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:37.044 Configuring doxy-api.conf using configuration 00:01:37.044 Program sphinx-build found: NO 00:01:37.044 Configuring rte_build_config.h using configuration 00:01:37.044 Message: 00:01:37.044 ================= 00:01:37.044 Applications Enabled 00:01:37.044 ================= 00:01:37.044 00:01:37.044 apps: 00:01:37.044 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:37.044 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:37.044 test-security-perf, 00:01:37.044 00:01:37.044 Message: 00:01:37.044 ================= 00:01:37.044 Libraries Enabled 00:01:37.044 ================= 00:01:37.044 00:01:37.044 libs: 00:01:37.044 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:37.044 meter, ethdev, 
pci, cmdline, metrics, hash, timer, acl, 00:01:37.044 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:37.044 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:37.044 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:37.044 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:37.044 table, pipeline, graph, node, 00:01:37.044 00:01:37.044 Message: 00:01:37.044 =============== 00:01:37.044 Drivers Enabled 00:01:37.044 =============== 00:01:37.044 00:01:37.044 common: 00:01:37.044 00:01:37.044 bus: 00:01:37.044 pci, vdev, 00:01:37.044 mempool: 00:01:37.044 ring, 00:01:37.044 dma: 00:01:37.044 00:01:37.044 net: 00:01:37.044 i40e, 00:01:37.044 raw: 00:01:37.044 00:01:37.044 crypto: 00:01:37.044 00:01:37.044 compress: 00:01:37.044 00:01:37.044 regex: 00:01:37.044 00:01:37.044 vdpa: 00:01:37.044 00:01:37.044 event: 00:01:37.044 00:01:37.044 baseband: 00:01:37.044 00:01:37.044 gpu: 00:01:37.044 00:01:37.044 00:01:37.044 Message: 00:01:37.044 ================= 00:01:37.044 Content Skipped 00:01:37.044 ================= 00:01:37.044 00:01:37.044 apps: 00:01:37.044 00:01:37.044 libs: 00:01:37.044 kni: explicitly disabled via build config (deprecated lib) 00:01:37.044 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:37.044 00:01:37.044 drivers: 00:01:37.044 common/cpt: not in enabled drivers build config 00:01:37.044 common/dpaax: not in enabled drivers build config 00:01:37.044 common/iavf: not in enabled drivers build config 00:01:37.044 common/idpf: not in enabled drivers build config 00:01:37.044 common/mvep: not in enabled drivers build config 00:01:37.044 common/octeontx: not in enabled drivers build config 00:01:37.044 bus/auxiliary: not in enabled drivers build config 00:01:37.044 bus/dpaa: not in enabled drivers build config 00:01:37.044 bus/fslmc: not in enabled drivers build config 00:01:37.044 bus/ifpga: not in enabled drivers build config 00:01:37.044 bus/vmbus: not in enabled drivers build config 00:01:37.044 common/cnxk: not in enabled drivers build config 00:01:37.044 common/mlx5: not in enabled drivers build config 00:01:37.044 common/qat: not in enabled drivers build config 00:01:37.044 common/sfc_efx: not in enabled drivers build config 00:01:37.044 mempool/bucket: not in enabled drivers build config 00:01:37.044 mempool/cnxk: not in enabled drivers build config 00:01:37.044 mempool/dpaa: not in enabled drivers build config 00:01:37.044 mempool/dpaa2: not in enabled drivers build config 00:01:37.044 mempool/octeontx: not in enabled drivers build config 00:01:37.044 mempool/stack: not in enabled drivers build config 00:01:37.044 dma/cnxk: not in enabled drivers build config 00:01:37.044 dma/dpaa: not in enabled drivers build config 00:01:37.044 dma/dpaa2: not in enabled drivers build config 00:01:37.044 dma/hisilicon: not in enabled drivers build config 00:01:37.044 dma/idxd: not in enabled drivers build config 00:01:37.044 dma/ioat: not in enabled drivers build config 00:01:37.044 dma/skeleton: not in enabled drivers build config 00:01:37.044 net/af_packet: not in enabled drivers build config 00:01:37.044 net/af_xdp: not in enabled drivers build config 00:01:37.044 net/ark: not in enabled drivers build config 00:01:37.044 net/atlantic: not in enabled drivers build config 00:01:37.044 net/avp: not in enabled drivers build config 00:01:37.044 net/axgbe: not in enabled drivers build config 00:01:37.044 net/bnx2x: not in enabled drivers build config 00:01:37.044 net/bnxt: not in 
enabled drivers build config 00:01:37.044 net/bonding: not in enabled drivers build config 00:01:37.044 net/cnxk: not in enabled drivers build config 00:01:37.044 net/cxgbe: not in enabled drivers build config 00:01:37.044 net/dpaa: not in enabled drivers build config 00:01:37.044 net/dpaa2: not in enabled drivers build config 00:01:37.045 net/e1000: not in enabled drivers build config 00:01:37.045 net/ena: not in enabled drivers build config 00:01:37.045 net/enetc: not in enabled drivers build config 00:01:37.045 net/enetfec: not in enabled drivers build config 00:01:37.045 net/enic: not in enabled drivers build config 00:01:37.045 net/failsafe: not in enabled drivers build config 00:01:37.045 net/fm10k: not in enabled drivers build config 00:01:37.045 net/gve: not in enabled drivers build config 00:01:37.045 net/hinic: not in enabled drivers build config 00:01:37.045 net/hns3: not in enabled drivers build config 00:01:37.045 net/iavf: not in enabled drivers build config 00:01:37.045 net/ice: not in enabled drivers build config 00:01:37.045 net/idpf: not in enabled drivers build config 00:01:37.045 net/igc: not in enabled drivers build config 00:01:37.045 net/ionic: not in enabled drivers build config 00:01:37.045 net/ipn3ke: not in enabled drivers build config 00:01:37.045 net/ixgbe: not in enabled drivers build config 00:01:37.045 net/kni: not in enabled drivers build config 00:01:37.045 net/liquidio: not in enabled drivers build config 00:01:37.045 net/mana: not in enabled drivers build config 00:01:37.045 net/memif: not in enabled drivers build config 00:01:37.045 net/mlx4: not in enabled drivers build config 00:01:37.045 net/mlx5: not in enabled drivers build config 00:01:37.045 net/mvneta: not in enabled drivers build config 00:01:37.045 net/mvpp2: not in enabled drivers build config 00:01:37.045 net/netvsc: not in enabled drivers build config 00:01:37.045 net/nfb: not in enabled drivers build config 00:01:37.045 net/nfp: not in enabled drivers build config 00:01:37.045 net/ngbe: not in enabled drivers build config 00:01:37.045 net/null: not in enabled drivers build config 00:01:37.045 net/octeontx: not in enabled drivers build config 00:01:37.045 net/octeon_ep: not in enabled drivers build config 00:01:37.045 net/pcap: not in enabled drivers build config 00:01:37.045 net/pfe: not in enabled drivers build config 00:01:37.045 net/qede: not in enabled drivers build config 00:01:37.045 net/ring: not in enabled drivers build config 00:01:37.045 net/sfc: not in enabled drivers build config 00:01:37.045 net/softnic: not in enabled drivers build config 00:01:37.045 net/tap: not in enabled drivers build config 00:01:37.045 net/thunderx: not in enabled drivers build config 00:01:37.045 net/txgbe: not in enabled drivers build config 00:01:37.045 net/vdev_netvsc: not in enabled drivers build config 00:01:37.045 net/vhost: not in enabled drivers build config 00:01:37.045 net/virtio: not in enabled drivers build config 00:01:37.045 net/vmxnet3: not in enabled drivers build config 00:01:37.045 raw/cnxk_bphy: not in enabled drivers build config 00:01:37.045 raw/cnxk_gpio: not in enabled drivers build config 00:01:37.045 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:37.045 raw/ifpga: not in enabled drivers build config 00:01:37.045 raw/ntb: not in enabled drivers build config 00:01:37.045 raw/skeleton: not in enabled drivers build config 00:01:37.045 crypto/armv8: not in enabled drivers build config 00:01:37.045 crypto/bcmfs: not in enabled drivers build config 00:01:37.045 
crypto/caam_jr: not in enabled drivers build config 00:01:37.045 crypto/ccp: not in enabled drivers build config 00:01:37.045 crypto/cnxk: not in enabled drivers build config 00:01:37.045 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.045 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.045 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.045 crypto/mlx5: not in enabled drivers build config 00:01:37.045 crypto/mvsam: not in enabled drivers build config 00:01:37.045 crypto/nitrox: not in enabled drivers build config 00:01:37.045 crypto/null: not in enabled drivers build config 00:01:37.045 crypto/octeontx: not in enabled drivers build config 00:01:37.045 crypto/openssl: not in enabled drivers build config 00:01:37.045 crypto/scheduler: not in enabled drivers build config 00:01:37.045 crypto/uadk: not in enabled drivers build config 00:01:37.045 crypto/virtio: not in enabled drivers build config 00:01:37.045 compress/isal: not in enabled drivers build config 00:01:37.045 compress/mlx5: not in enabled drivers build config 00:01:37.045 compress/octeontx: not in enabled drivers build config 00:01:37.045 compress/zlib: not in enabled drivers build config 00:01:37.045 regex/mlx5: not in enabled drivers build config 00:01:37.045 regex/cn9k: not in enabled drivers build config 00:01:37.045 vdpa/ifc: not in enabled drivers build config 00:01:37.045 vdpa/mlx5: not in enabled drivers build config 00:01:37.045 vdpa/sfc: not in enabled drivers build config 00:01:37.045 event/cnxk: not in enabled drivers build config 00:01:37.045 event/dlb2: not in enabled drivers build config 00:01:37.045 event/dpaa: not in enabled drivers build config 00:01:37.045 event/dpaa2: not in enabled drivers build config 00:01:37.045 event/dsw: not in enabled drivers build config 00:01:37.045 event/opdl: not in enabled drivers build config 00:01:37.045 event/skeleton: not in enabled drivers build config 00:01:37.045 event/sw: not in enabled drivers build config 00:01:37.045 event/octeontx: not in enabled drivers build config 00:01:37.045 baseband/acc: not in enabled drivers build config 00:01:37.045 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:37.045 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:37.045 baseband/la12xx: not in enabled drivers build config 00:01:37.045 baseband/null: not in enabled drivers build config 00:01:37.045 baseband/turbo_sw: not in enabled drivers build config 00:01:37.045 gpu/cuda: not in enabled drivers build config 00:01:37.045 00:01:37.045 00:01:37.045 Build targets in project: 316 00:01:37.045 00:01:37.045 DPDK 22.11.4 00:01:37.045 00:01:37.045 User defined options 00:01:37.045 libdir : lib 00:01:37.045 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.045 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:37.045 c_link_args : 00:01:37.045 enable_docs : false 00:01:37.045 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:37.045 enable_kmods : false 00:01:37.045 machine : native 00:01:37.045 tests : false 00:01:37.045 00:01:37.045 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.045 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
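[Editor's note] The "User defined options" summary above, together with meson's warning about running the setup command as `meson [options]` instead of `meson setup [options]`, indicates the configure step used the older invocation form. Below is a minimal sketch of an equivalent, non-deprecated invocation reconstructed only from the options printed in this log; the build directory name, line wrapping, and the assumption that autobuild_common.sh drives meson this way are mine, not taken from the CI script itself.

    # Hedged sketch, assuming the options summarized in the log above;
    # not the literal command executed by autobuild_common.sh.
    meson setup build-tmp \
      --libdir lib \
      --prefix /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
    # Compile step, matching the ninja invocation logged immediately below.
    ninja -C build-tmp -j48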
00:01:37.045 02:42:47 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:37.309 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:37.309 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:37.309 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:37.309 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:37.309 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:37.309 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.309 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.309 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.309 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.309 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.309 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.309 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.309 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.309 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.310 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.310 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.571 [16/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:37.571 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.571 [18/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.571 [19/745] Linking static target lib/librte_kvargs.a 00:01:37.571 [20/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.571 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.571 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.571 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.571 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.571 [25/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.571 [26/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.571 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.571 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.571 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:37.571 [30/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:37.571 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.571 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.571 [33/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.571 [34/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.571 [35/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.571 [36/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.571 [37/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.571 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:37.571 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.571 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.571 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.571 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.571 [43/745] Generating lib/rte_eal_mingw with a custom command 00:01:37.571 [44/745] Generating lib/rte_eal_def with a custom command 00:01:37.571 [45/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.571 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.571 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.571 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:37.571 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.571 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.571 [51/745] Generating lib/rte_ring_def with a custom command 00:01:37.571 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.571 [53/745] Generating lib/rte_ring_mingw with a custom command 00:01:37.571 [54/745] Generating lib/rte_rcu_def with a custom command 00:01:37.571 [55/745] Generating lib/rte_rcu_mingw with a custom command 00:01:37.571 [56/745] Generating lib/rte_mempool_mingw with a custom command 00:01:37.571 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:37.571 [58/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.571 [59/745] Generating lib/rte_mempool_def with a custom command 00:01:37.571 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.571 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.571 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.571 [63/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:37.836 [64/745] Generating lib/rte_mbuf_def with a custom command 00:01:37.836 [65/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:37.836 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.836 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:37.836 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:37.836 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.836 [70/745] Generating lib/rte_net_def with a custom command 00:01:37.836 [71/745] Generating lib/rte_net_mingw with a custom command 00:01:37.836 [72/745] Generating lib/rte_meter_def with a custom command 00:01:37.836 [73/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.836 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:37.836 [75/745] Generating lib/rte_meter_mingw with a custom command 00:01:37.836 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:37.836 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:37.836 [78/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.836 [79/745] Linking static target lib/librte_ring.a 00:01:37.836 [80/745] Generating lib/rte_ethdev_def with a custom command 00:01:37.836 [81/745] 
Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.836 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:37.836 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:37.836 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:38.100 [85/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.100 [86/745] Generating lib/rte_pci_def with a custom command 00:01:38.100 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.100 [88/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.100 [89/745] Linking static target lib/librte_meter.a 00:01:38.100 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:38.100 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.100 [92/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.100 [93/745] Linking static target lib/librte_pci.a 00:01:38.100 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.100 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.100 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.366 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.366 [98/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.366 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.366 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.366 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.366 [102/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.366 [103/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.366 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.366 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.366 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.366 [107/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.366 [108/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.366 [109/745] Linking static target lib/librte_telemetry.a 00:01:38.366 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:38.366 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:38.366 [112/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.634 [113/745] Generating lib/rte_metrics_def with a custom command 00:01:38.634 [114/745] Generating lib/rte_metrics_mingw with a custom command 00:01:38.634 [115/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.634 [116/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.634 [117/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.634 [118/745] Generating lib/rte_hash_def with a custom command 00:01:38.634 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:38.634 [120/745] Generating lib/rte_timer_def with a custom command 00:01:38.634 [121/745] Generating lib/rte_timer_mingw with a custom command 00:01:38.634 [122/745] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:38.634 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.897 [124/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.897 [125/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.897 [126/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.897 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:38.897 [128/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.897 [129/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:38.897 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.897 [131/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.897 [132/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.897 [133/745] Generating lib/rte_acl_def with a custom command 00:01:38.897 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:38.897 [135/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.897 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:38.897 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:38.897 [138/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:38.897 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:01:38.897 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:38.897 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.897 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.159 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.159 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.159 [145/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.159 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.159 [147/745] Linking target lib/librte_telemetry.so.23.0 00:01:39.159 [148/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.159 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.159 [150/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.159 [151/745] Generating lib/rte_bpf_def with a custom command 00:01:39.159 [152/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.159 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.159 [154/745] Generating lib/rte_cfgfile_def with a custom command 00:01:39.159 [155/745] Generating lib/rte_bpf_mingw with a custom command 00:01:39.159 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:39.159 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.420 [158/745] Generating lib/rte_compressdev_def with a custom command 00:01:39.420 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:39.420 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.420 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.420 [162/745] Generating lib/rte_cryptodev_def with a custom command 00:01:39.420 [163/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.420 [164/745] Linking static target 
lib/librte_rcu.a 00:01:39.420 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.420 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:39.420 [167/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.421 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.421 [169/745] Linking static target lib/librte_timer.a 00:01:39.421 [170/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.421 [171/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.421 [172/745] Linking static target lib/librte_net.a 00:01:39.421 [173/745] Linking static target lib/librte_cmdline.a 00:01:39.421 [174/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.421 [175/745] Generating lib/rte_distributor_def with a custom command 00:01:39.421 [176/745] Generating lib/rte_distributor_mingw with a custom command 00:01:39.683 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.683 [178/745] Generating lib/rte_efd_def with a custom command 00:01:39.683 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:39.683 [180/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.683 [181/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:39.683 [182/745] Linking static target lib/librte_mempool.a 00:01:39.683 [183/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:39.683 [184/745] Linking static target lib/librte_metrics.a 00:01:39.683 [185/745] Linking static target lib/librte_cfgfile.a 00:01:39.949 [186/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.949 [187/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.949 [188/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.949 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:39.949 [190/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.949 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.214 [192/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:40.214 [193/745] Linking static target lib/librte_eal.a 00:01:40.214 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:40.214 [195/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:40.214 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:40.214 [197/745] Generating lib/rte_eventdev_def with a custom command 00:01:40.214 [198/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:40.214 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:40.214 [200/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.214 [201/745] Linking static target lib/librte_bitratestats.a 00:01:40.214 [202/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:40.214 [203/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:40.214 [204/745] Generating lib/rte_gpudev_def with a custom command 00:01:40.214 [205/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.214 [206/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:40.481 [207/745] Generating lib/rte_gro_def with a custom 
command 00:01:40.481 [208/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:40.481 [209/745] Generating lib/rte_gro_mingw with a custom command 00:01:40.481 [210/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.481 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:40.481 [212/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.481 [213/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.746 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:40.746 [215/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:40.746 [216/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:40.746 [217/745] Generating lib/rte_gso_def with a custom command 00:01:40.746 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:40.746 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:40.746 [220/745] Generating lib/rte_gso_mingw with a custom command 00:01:40.746 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.747 [222/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:40.747 [223/745] Linking static target lib/librte_bbdev.a 00:01:40.747 [224/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:40.747 [225/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.006 [226/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:41.006 [227/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.006 [228/745] Generating lib/rte_ip_frag_def with a custom command 00:01:41.006 [229/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.006 [230/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:41.006 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:41.006 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:41.006 [233/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.006 [234/745] Generating lib/rte_latencystats_def with a custom command 00:01:41.006 [235/745] Linking static target lib/librte_compressdev.a 00:01:41.006 [236/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:41.006 [237/745] Generating lib/rte_lpm_def with a custom command 00:01:41.006 [238/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:41.006 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:41.273 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:41.273 [241/745] Linking static target lib/librte_jobstats.a 00:01:41.273 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:41.273 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:41.273 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:41.534 [245/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.534 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:41.534 [247/745] Linking static target lib/librte_distributor.a 00:01:41.534 [248/745] 
Generating lib/rte_member_def with a custom command 00:01:41.534 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:41.796 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:41.796 [251/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:41.796 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.796 [253/745] Linking static target lib/librte_bpf.a 00:01:41.796 [254/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:41.796 [255/745] Generating lib/rte_pcapng_def with a custom command 00:01:41.796 [256/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:41.796 [257/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.796 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:41.796 [259/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:41.796 [260/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:41.796 [261/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:41.796 [262/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:41.796 [263/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.061 [264/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:42.061 [265/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:42.061 [266/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:42.061 [267/745] Generating lib/rte_power_def with a custom command 00:01:42.061 [268/745] Generating lib/rte_power_mingw with a custom command 00:01:42.061 [269/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:42.061 [270/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:42.061 [271/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:42.061 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:42.061 [273/745] Linking static target lib/librte_gro.a 00:01:42.061 [274/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:42.061 [275/745] Linking static target lib/librte_gpudev.a 00:01:42.061 [276/745] Generating lib/rte_rawdev_def with a custom command 00:01:42.061 [277/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:42.061 [278/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:42.061 [279/745] Generating lib/rte_regexdev_def with a custom command 00:01:42.061 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.061 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:42.061 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:42.061 [283/745] Generating lib/rte_rib_def with a custom command 00:01:42.061 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:42.326 [285/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:42.326 [286/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.326 [287/745] Generating lib/rte_reorder_def with a custom command 00:01:42.326 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:01:42.326 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:42.326 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:42.326 [291/745] Generating lib/gro.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:42.326 [292/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.593 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:42.593 [294/745] Generating lib/rte_sched_def with a custom command 00:01:42.593 [295/745] Generating lib/rte_sched_mingw with a custom command 00:01:42.593 [296/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:42.593 [297/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:42.593 [298/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:42.593 [299/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:42.593 [300/745] Linking static target lib/librte_latencystats.a 00:01:42.593 [301/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:42.593 [302/745] Generating lib/rte_security_def with a custom command 00:01:42.593 [303/745] Generating lib/rte_security_mingw with a custom command 00:01:42.593 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:42.593 [305/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:42.593 [306/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:42.593 [307/745] Generating lib/rte_stack_def with a custom command 00:01:42.593 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:42.593 [309/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:42.593 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:42.593 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:42.593 [312/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:42.593 [313/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:42.593 [314/745] Linking static target lib/librte_rawdev.a 00:01:42.593 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:42.593 [316/745] Linking static target lib/librte_stack.a 00:01:42.593 [317/745] Generating lib/rte_vhost_def with a custom command 00:01:42.593 [318/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:42.855 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:42.855 [320/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:42.855 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:42.855 [322/745] Linking static target lib/librte_dmadev.a 00:01:42.855 [323/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.855 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:42.855 [325/745] Linking static target lib/librte_ip_frag.a 00:01:43.126 [326/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:43.126 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:43.126 [328/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:43.126 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.126 [330/745] Generating lib/rte_ipsec_def with a custom command 00:01:43.126 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:43.126 [332/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:43.394 [333/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:43.394 [334/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:43.394 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.394 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.394 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:43.394 [338/745] Generating lib/rte_fib_def with a custom command 00:01:43.394 [339/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.394 [340/745] Generating lib/rte_fib_mingw with a custom command 00:01:43.394 [341/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:43.394 [342/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:43.394 [343/745] Linking static target lib/librte_gso.a 00:01:43.658 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:43.658 [345/745] Linking static target lib/librte_regexdev.a 00:01:43.658 [346/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:43.658 [347/745] Linking static target lib/librte_efd.a 00:01:43.926 [348/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.926 [349/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:43.926 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.926 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:43.926 [352/745] Linking static target lib/librte_pcapng.a 00:01:43.926 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:43.926 [354/745] Linking static target lib/librte_lpm.a 00:01:43.926 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:43.926 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:43.926 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.205 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.205 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.205 [360/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.205 [361/745] Linking static target lib/librte_reorder.a 00:01:44.205 [362/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.205 [363/745] Generating lib/rte_port_def with a custom command 00:01:44.205 [364/745] Generating lib/rte_port_mingw with a custom command 00:01:44.474 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.474 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:44.474 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:44.474 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:44.474 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:44.474 [370/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:44.474 [371/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.474 [372/745] Generating lib/rte_pdump_def with a custom command 00:01:44.474 [373/745] Generating lib/rte_pdump_mingw with a custom command 00:01:44.474 [374/745] Linking static target lib/librte_security.a 
00:01:44.474 [375/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:44.475 [376/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.475 [377/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:44.475 [378/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:44.475 [379/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:44.475 [380/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.475 [381/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.475 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.475 [383/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.475 [384/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:44.475 [385/745] Linking static target lib/librte_hash.a 00:01:44.737 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:44.737 [387/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.737 [388/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.737 [389/745] Linking static target lib/librte_power.a 00:01:44.737 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:44.737 [391/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:44.737 [392/745] Linking static target lib/librte_rib.a 00:01:44.737 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:44.737 [394/745] Linking static target lib/librte_acl.a 00:01:45.006 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:45.006 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:45.006 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:45.006 [398/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.268 [399/745] Generating lib/rte_table_def with a custom command 00:01:45.268 [400/745] Generating lib/rte_table_mingw with a custom command 00:01:45.268 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.268 [402/745] Linking static target lib/librte_ethdev.a 00:01:45.268 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.533 [404/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:45.533 [405/745] Linking static target lib/librte_mbuf.a 00:01:45.533 [406/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.533 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:45.533 [408/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:45.533 [409/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.802 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:45.802 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:45.802 [412/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.802 [413/745] Generating lib/rte_pipeline_def with a custom command 00:01:45.802 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:45.802 [415/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:45.802 [416/745] 
Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:45.802 [417/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:45.802 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:45.802 [419/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:45.802 [420/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:45.802 [421/745] Generating lib/rte_graph_def with a custom command 00:01:45.802 [422/745] Generating lib/rte_graph_mingw with a custom command 00:01:45.802 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:45.802 [424/745] Linking static target lib/librte_fib.a 00:01:46.068 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:46.068 [426/745] Linking static target lib/librte_member.a 00:01:46.068 [427/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:46.068 [428/745] Linking static target lib/librte_eventdev.a 00:01:46.068 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:46.068 [430/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:46.068 [431/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:46.343 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:46.343 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:46.343 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:46.343 [435/745] Generating lib/rte_node_def with a custom command 00:01:46.343 [436/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.343 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:46.343 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:46.343 [439/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.343 [440/745] Generating lib/rte_node_mingw with a custom command 00:01:46.343 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.343 [442/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.608 [443/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:46.608 [444/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.608 [445/745] Linking static target lib/librte_cryptodev.a 00:01:46.608 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:46.608 [447/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.608 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:46.608 [449/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.608 [450/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:46.608 [451/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:46.608 [452/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.608 [453/745] Linking static target lib/librte_sched.a 00:01:46.608 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:46.608 [455/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:46.608 [456/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.608 [457/745] 
Generating drivers/rte_mempool_ring_def with a custom command 00:01:46.608 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:46.608 [459/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:46.873 [460/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:46.873 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:46.873 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.873 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:46.873 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:46.873 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.873 [466/745] Linking static target lib/librte_pdump.a 00:01:46.873 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:46.873 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:47.144 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:47.144 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.144 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.144 [472/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:47.144 [473/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:47.144 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.144 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:47.144 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:47.410 [477/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:47.410 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:47.410 [479/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.410 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:47.410 [481/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:47.410 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:47.410 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:47.410 [484/745] Linking static target lib/librte_table.a 00:01:47.410 [485/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.410 [486/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:47.410 [487/745] Linking static target lib/librte_ipsec.a 00:01:47.676 [488/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.676 [489/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:47.676 [490/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.676 [491/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.676 [492/745] Linking static target drivers/librte_bus_vdev.a 00:01:47.676 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.676 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.944 [495/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:47.944 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:47.944 [497/745] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:47.944 [498/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.213 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:48.213 [500/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.213 [501/745] Linking static target lib/librte_graph.a 00:01:48.213 [502/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:48.213 [503/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:48.213 [504/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:48.213 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:48.213 [506/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.213 [507/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:48.213 [508/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.213 [509/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.213 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:48.214 [511/745] Linking static target drivers/librte_bus_pci.a 00:01:48.214 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:48.477 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:48.477 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.746 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:49.013 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.013 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:49.013 [518/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.013 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:49.013 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:49.275 [521/745] Linking static target lib/librte_port.a 00:01:49.275 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:49.275 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:49.275 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.275 [525/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:49.275 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.542 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:49.542 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.542 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:49.808 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.808 [531/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:49.808 [532/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.808 [533/745] Linking static target drivers/librte_mempool_ring.a 00:01:49.808 [534/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:49.808 [535/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.808 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:49.808 [537/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:49.808 [538/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:50.075 [539/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.075 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:50.075 [541/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.340 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:50.340 [543/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:50.606 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:50.606 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:50.606 [546/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:50.606 [547/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:50.874 [548/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:50.874 [549/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:50.874 [550/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:50.874 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:51.141 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:51.141 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:51.410 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:51.410 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:51.410 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:51.410 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:51.678 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:51.678 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:51.942 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:51.942 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:51.942 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:51.942 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:51.942 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:51.942 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:52.208 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:52.208 [567/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:52.208 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:52.208 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:52.208 [570/745] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:52.472 [571/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:52.472 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:52.472 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:52.737 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:52.737 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:52.737 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:52.737 [577/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:52.737 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:52.737 [579/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:52.737 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:53.003 [581/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:53.003 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:53.003 [583/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.003 [584/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:53.003 [585/745] Linking target lib/librte_eal.so.23.0 00:01:53.003 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:53.273 [587/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:53.273 [588/745] Linking target lib/librte_ring.so.23.0 00:01:53.273 [589/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:53.539 [590/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.539 [591/745] Linking target lib/librte_meter.so.23.0 00:01:53.539 [592/745] Linking target lib/librte_pci.so.23.0 00:01:53.539 [593/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:53.539 [594/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:53.539 [595/745] Linking target lib/librte_rcu.so.23.0 00:01:53.806 [596/745] Linking target lib/librte_mempool.so.23.0 00:01:53.806 [597/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:53.806 [598/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:53.806 [599/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:53.806 [600/745] Linking target lib/librte_timer.so.23.0 00:01:53.806 [601/745] Linking target lib/librte_acl.so.23.0 00:01:53.806 [602/745] Linking target lib/librte_cfgfile.so.23.0 00:01:53.806 [603/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:53.806 [604/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:53.806 [605/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:53.806 [606/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:53.806 [607/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:53.806 [608/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:53.806 [609/745] Linking target 
lib/librte_jobstats.so.23.0 00:01:53.806 [610/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:53.806 [611/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:53.806 [612/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:53.806 [613/745] Linking target lib/librte_dmadev.so.23.0 00:01:53.806 [614/745] Linking target lib/librte_rawdev.so.23.0 00:01:54.073 [615/745] Linking target lib/librte_stack.so.23.0 00:01:54.073 [616/745] Linking target lib/librte_graph.so.23.0 00:01:54.073 [617/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:54.073 [618/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:54.073 [619/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:54.073 [620/745] Linking target lib/librte_rib.so.23.0 00:01:54.073 [621/745] Linking target lib/librte_mbuf.so.23.0 00:01:54.073 [622/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:54.073 [623/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:54.073 [624/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:54.073 [625/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:54.073 [626/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:54.334 [627/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:54.334 [628/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:54.334 [629/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:54.334 [630/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:54.334 [631/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:54.334 [632/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:54.334 [633/745] Linking target lib/librte_reorder.so.23.0 00:01:54.334 [634/745] Linking target lib/librte_distributor.so.23.0 00:01:54.334 [635/745] Linking target lib/librte_compressdev.so.23.0 00:01:54.334 [636/745] Linking target lib/librte_gpudev.so.23.0 00:01:54.334 [637/745] Linking target lib/librte_bbdev.so.23.0 00:01:54.334 [638/745] Linking target lib/librte_sched.so.23.0 00:01:54.334 [639/745] Linking target lib/librte_net.so.23.0 00:01:54.334 [640/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:54.334 [641/745] Linking target lib/librte_fib.so.23.0 00:01:54.334 [642/745] Linking target lib/librte_regexdev.so.23.0 00:01:54.334 [643/745] Linking target lib/librte_cryptodev.so.23.0 00:01:54.334 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:54.334 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:54.334 [646/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:54.334 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:54.595 [648/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:54.595 [649/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:54.595 [650/745] Linking target lib/librte_hash.so.23.0 00:01:54.595 [651/745] Linking target lib/librte_ethdev.so.23.0 00:01:54.595 [652/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:54.595 [653/745] Linking target 
lib/librte_cmdline.so.23.0 00:01:54.595 [654/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:54.595 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:54.595 [656/745] Linking target lib/librte_security.so.23.0 00:01:54.595 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:54.595 [658/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:54.595 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:54.595 [660/745] Linking target lib/librte_efd.so.23.0 00:01:54.595 [661/745] Linking target lib/librte_member.so.23.0 00:01:54.856 [662/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:54.856 [663/745] Linking target lib/librte_lpm.so.23.0 00:01:54.856 [664/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:54.856 [665/745] Linking target lib/librte_gro.so.23.0 00:01:54.856 [666/745] Linking target lib/librte_ip_frag.so.23.0 00:01:54.856 [667/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:54.856 [668/745] Linking target lib/librte_metrics.so.23.0 00:01:54.856 [669/745] Linking target lib/librte_pcapng.so.23.0 00:01:54.856 [670/745] Linking target lib/librte_gso.so.23.0 00:01:54.856 [671/745] Linking target lib/librte_power.so.23.0 00:01:54.856 [672/745] Linking target lib/librte_bpf.so.23.0 00:01:54.856 [673/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:54.856 [674/745] Linking target lib/librte_eventdev.so.23.0 00:01:54.856 [675/745] Linking target lib/librte_ipsec.so.23.0 00:01:54.856 [676/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:54.856 [677/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:54.856 [678/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:54.856 [679/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:54.856 [680/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:55.116 [681/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:55.116 [682/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:55.116 [683/745] Linking target lib/librte_latencystats.so.23.0 00:01:55.116 [684/745] Linking target lib/librte_bitratestats.so.23.0 00:01:55.116 [685/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:55.116 [686/745] Linking target lib/librte_pdump.so.23.0 00:01:55.116 [687/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:55.116 [688/745] Linking target lib/librte_port.so.23.0 00:01:55.116 [689/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:55.116 [690/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:55.116 [691/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:55.376 [692/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:55.376 [693/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:55.376 [694/745] Linking target lib/librte_table.so.23.0 00:01:55.376 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:55.376 [696/745] Generating symbol 
file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:55.944 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:55.944 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:55.944 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:56.202 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:56.202 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:56.202 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:56.202 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:56.769 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:56.769 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:56.769 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:56.769 [707/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:56.769 [708/745] Linking static target drivers/librte_net_i40e.a 00:01:56.769 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:57.028 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:57.288 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.547 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:57.805 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:57.805 [714/745] Linking static target lib/librte_node.a 00:01:58.064 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.064 [716/745] Linking target lib/librte_node.so.23.0 00:01:58.323 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:58.890 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:59.826 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:06.390 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:38.470 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:38.470 [722/745] Linking static target lib/librte_vhost.a 00:02:38.730 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.730 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:53.611 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:53.611 [726/745] Linking static target lib/librte_pipeline.a 00:02:53.611 [727/745] Linking target app/dpdk-test-sad 00:02:53.611 [728/745] Linking target app/dpdk-test-regex 00:02:53.611 [729/745] Linking target app/dpdk-test-cmdline 00:02:53.611 [730/745] Linking target app/dpdk-dumpcap 00:02:53.611 [731/745] Linking target app/dpdk-test-acl 00:02:53.611 [732/745] Linking target app/dpdk-test-gpudev 00:02:53.611 [733/745] Linking target app/dpdk-test-security-perf 00:02:53.611 [734/745] Linking target app/dpdk-test-pipeline 00:02:53.611 [735/745] Linking target app/dpdk-test-fib 00:02:53.611 [736/745] Linking target app/dpdk-pdump 00:02:53.611 [737/745] Linking target app/dpdk-test-flow-perf 00:02:53.611 [738/745] Linking target app/dpdk-test-compress-perf 00:02:53.611 [739/745] Linking target app/dpdk-test-bbdev 00:02:53.611 [740/745] Linking target app/dpdk-proc-info 
00:02:53.611 [741/745] Linking target app/dpdk-test-crypto-perf 00:02:53.611 [742/745] Linking target app/dpdk-test-eventdev 00:02:53.611 [743/745] Linking target app/dpdk-testpmd 00:02:53.871 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.871 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:53.871 02:44:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:53.871 02:44:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:53.871 02:44:04 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:53.871 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:54.131 [0/1] Installing files. 00:02:54.396 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.396 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.396 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 
00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:54.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.399 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.400 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:54.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:54.401 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.402 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:54.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:54.402 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing 
lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.402 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.662 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.662 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.662 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.662 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.662 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.662 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.662 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_bbdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_member.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_table.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.663 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.926 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.926 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.926 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.926 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.926 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.926 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:54.930 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:54.930 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:54.930 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:54.930 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:54.930 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:54.930 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:54.930 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:54.930 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:54.930 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:54.930 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:54.930 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:54.930 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:54.930 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:54.930 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:54.930 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:54.930 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:54.930 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:54.930 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:54.930 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:54.930 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:54.930 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:54.930 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:54.930 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:54.930 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:54.930 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:54.930 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:54.930 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:54.930 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:54.930 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:54.930 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:54.930 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:54.930 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:54.930 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:54.930 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:54.930 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:54.930 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:54.930 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:54.930 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:54.930 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:02:54.930 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:54.931 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:54.931 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:54.931 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:54.931 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:54.931 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:54.931 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:54.931 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:54.931 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:54.931 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:54.931 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:54.931 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:54.931 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:54.931 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:54.931 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:54.931 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:54.931 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:54.931 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:54.931 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:54.931 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:54.931 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:54.931 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:54.931 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:54.931 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:54.931 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:54.931 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:54.931 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:54.931 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:54.931 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:54.931 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:54.931 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:54.931 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:54.931 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:54.931 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:54.931 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:54.931 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:54.931 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:54.931 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:54.931 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:54.931 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:54.931 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:54.931 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:54.931 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:54.931 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:54.931 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:54.931 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:54.931 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:54.931 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:54.931 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:54.931 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:54.931 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:54.931 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:54.931 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:54.931 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:54.931 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:54.931 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:54.931 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:54.931 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:54.931 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:54.931 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:54.931 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:54.931 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:54.931 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:54.931 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:54.931 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:54.931 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:54.931 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:54.931 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:54.931 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:54.931 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:54.931 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:54.931 Installing symlink pointing to 
librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:54.931 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:54.931 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so'
00:02:54.931 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23'
00:02:54.931 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0'
00:02:54.931 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so'
00:02:54.931 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23'
00:02:54.931 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0'
00:02:54.931 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so'
00:02:54.931 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23'
00:02:54.931 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0'
00:02:54.931 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so'
00:02:54.931 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23'
00:02:54.931 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0'
00:02:54.932 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:54.932 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:55.191 02:44:05 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:02:55.191 02:44:05 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:55.191
00:02:55.191 real 1m25.623s
00:02:55.191 user 14m27.995s
00:02:55.191 sys 1m54.883s
00:02:55.192 02:44:05 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:55.192 02:44:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:55.192 ************************************
00:02:55.192 END TEST build_native_dpdk
00:02:55.192 ************************************
00:02:55.192 02:44:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:55.192 02:44:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:55.192 02:44:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:55.192 02:44:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:55.192 02:44:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:55.192 02:44:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:55.192 02:44:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:55.192 02:44:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:55.192 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
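The staged DPDK tree referenced above follows the usual private-install layout: headers under build/include, versioned librte_* objects with their .so / .so.23 symlinks under build/lib, PMD drivers mirrored into dpdk/pmds-23.0 by symlink-drivers-solibs.sh, and libdpdk.pc under build/lib/pkgconfig, which is the directory the configure step reports using. The following is a minimal shell sketch of how a consumer resolves that install; only paths shown in this log are assumed, and the outputs hinted at in the comments are illustrative rather than copied from this run.

  # Sketch only: resolving the DPDK build tree that SPDK's configure consumes.
  DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  # Each library is one real versioned object plus two symlinks, e.g.
  #   librte_eal.so -> librte_eal.so.23 -> librte_eal.so.23.0
  readlink "$DPDK_BUILD/lib/librte_eal.so"       # expected: librte_eal.so.23
  readlink "$DPDK_BUILD/lib/librte_eal.so.23"    # expected: librte_eal.so.23.0
  # libdpdk.pc lives in lib/pkgconfig, so compile/link flags can be queried with pkg-config.
  export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"
  pkg-config --cflags libdpdk    # roughly: -I$DPDK_BUILD/include ...
  pkg-config --libs libdpdk      # roughly: -L$DPDK_BUILD/lib -lrte_ethdev -lrte_eal ...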
00:02:55.192 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:55.192 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:55.452 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:55.711 Using 'verbs' RDMA provider
00:03:06.277 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:16.268 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:16.268 Creating mk/config.mk...done.
00:03:16.268 Creating mk/cc.flags.mk...done.
00:03:16.268 Type 'make' to build.
00:03:16.268 02:44:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:03:16.268 02:44:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:16.268 02:44:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:16.268 02:44:26 -- common/autotest_common.sh@10 -- $ set +x
00:03:16.268 ************************************
00:03:16.268 START TEST make
00:03:16.268 ************************************
00:03:16.268 02:44:26 make -- common/autotest_common.sh@1129 -- $ make -j48
00:03:16.842 make[1]: Nothing to be done for 'all'.
00:03:18.764 The Meson build system
00:03:18.764 Version: 1.5.0
00:03:18.764 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:18.764 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:18.764 Build type: native build
00:03:18.764 Project name: libvfio-user
00:03:18.764 Project version: 0.0.1
00:03:18.764 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:18.764 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:18.764 Host machine cpu family: x86_64
00:03:18.764 Host machine cpu: x86_64
00:03:18.764 Run-time dependency threads found: YES
00:03:18.764 Library dl found: YES
00:03:18.764 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:18.764 Run-time dependency json-c found: YES 0.17
00:03:18.764 Run-time dependency cmocka found: YES 1.1.7
00:03:18.764 Program pytest-3 found: NO
00:03:18.764 Program flake8 found: NO
00:03:18.764 Program misspell-fixer found: NO
00:03:18.764 Program restructuredtext-lint found: NO
00:03:18.764 Program valgrind found: YES (/usr/bin/valgrind)
00:03:18.764 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:18.764 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:18.764 Compiler for C supports arguments -Wwrite-strings: YES
00:03:18.764 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:18.764 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:18.764 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:18.764 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
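For reference, the configure-and-build sequence that the autobuild wrapper starts above reduces to the standalone sketch below. Every configure flag is copied verbatim from the invocation logged earlier; -j48 simply matches this build host, and the sketch approximates what autobuild.sh drives rather than reproducing the script itself.

  # Approximate manual reproduction of the logged SPDK configure + make (sketch, not autobuild.sh).
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user \
      --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --with-shared
  make -j48    # logged above as: run_test make make -j48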
00:03:18.764 Build targets in project: 8 00:03:18.764 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:18.764 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:18.764 00:03:18.764 libvfio-user 0.0.1 00:03:18.764 00:03:18.764 User defined options 00:03:18.764 buildtype : debug 00:03:18.764 default_library: shared 00:03:18.764 libdir : /usr/local/lib 00:03:18.764 00:03:18.764 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:19.348 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:19.612 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:19.612 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:19.612 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:19.612 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:19.612 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:19.612 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:19.612 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:19.612 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:19.612 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:19.612 [10/37] Compiling C object samples/null.p/null.c.o 00:03:19.612 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:19.612 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:19.612 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:19.612 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:19.612 [15/37] Compiling C object samples/client.p/client.c.o 00:03:19.612 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:19.612 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:19.612 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:19.612 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:19.612 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:19.612 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:19.612 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:19.612 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:19.612 [24/37] Compiling C object samples/server.p/server.c.o 00:03:19.612 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:19.878 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:19.878 [27/37] Linking target samples/client 00:03:19.878 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:19.878 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:19.878 [30/37] Linking target test/unit_tests 00:03:19.878 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:20.142 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:20.142 [33/37] Linking target samples/server 00:03:20.142 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:20.142 [35/37] Linking target samples/null 00:03:20.142 [36/37] Linking target samples/lspci 00:03:20.142 [37/37] Linking target samples/gpio-pci-idio-16 00:03:20.142 INFO: autodetecting backend as ninja 00:03:20.142 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
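The Meson configuration and 37-step Ninja build of the bundled libvfio-user summarized above can be approximated with the two commands below. The source and build directories and the buildtype, default_library and libdir values are taken from the log; everything else is left at Meson defaults, so treat this as a sketch of the equivalent invocation rather than the exact wrapper SPDK's Makefile uses.

  # Sketch: equivalent of the libvfio-user Meson setup and Ninja build logged above.
  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" --buildtype debug --default-library shared --libdir /usr/local/lib
  ninja -C "$BUILD"    # compiles the library, samples and unit tests, which the DESTDIR meson install recorded just below then stages into SPDK's build tree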
00:03:20.403 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:21.341 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:21.341 ninja: no work to do. 00:04:00.053 CC lib/ut_mock/mock.o 00:04:00.053 CC lib/log/log.o 00:04:00.053 CC lib/log/log_flags.o 00:04:00.053 CC lib/log/log_deprecated.o 00:04:00.053 CC lib/ut/ut.o 00:04:00.053 LIB libspdk_ut_mock.a 00:04:00.053 LIB libspdk_ut.a 00:04:00.053 LIB libspdk_log.a 00:04:00.053 SO libspdk_ut.so.2.0 00:04:00.053 SO libspdk_ut_mock.so.6.0 00:04:00.053 SO libspdk_log.so.7.1 00:04:00.053 SYMLINK libspdk_ut_mock.so 00:04:00.053 SYMLINK libspdk_ut.so 00:04:00.053 SYMLINK libspdk_log.so 00:04:00.053 CC lib/dma/dma.o 00:04:00.053 CC lib/ioat/ioat.o 00:04:00.053 CXX lib/trace_parser/trace.o 00:04:00.053 CC lib/util/base64.o 00:04:00.053 CC lib/util/bit_array.o 00:04:00.053 CC lib/util/cpuset.o 00:04:00.053 CC lib/util/crc16.o 00:04:00.053 CC lib/util/crc32.o 00:04:00.053 CC lib/util/crc32c.o 00:04:00.053 CC lib/util/crc32_ieee.o 00:04:00.053 CC lib/util/crc64.o 00:04:00.053 CC lib/util/dif.o 00:04:00.053 CC lib/util/fd.o 00:04:00.053 CC lib/util/fd_group.o 00:04:00.053 CC lib/util/file.o 00:04:00.053 CC lib/util/hexlify.o 00:04:00.053 CC lib/util/iov.o 00:04:00.053 CC lib/util/math.o 00:04:00.053 CC lib/util/net.o 00:04:00.053 CC lib/util/pipe.o 00:04:00.053 CC lib/util/strerror_tls.o 00:04:00.053 CC lib/util/string.o 00:04:00.053 CC lib/util/uuid.o 00:04:00.053 CC lib/util/zipf.o 00:04:00.053 CC lib/util/xor.o 00:04:00.053 CC lib/util/md5.o 00:04:00.053 CC lib/vfio_user/host/vfio_user_pci.o 00:04:00.053 CC lib/vfio_user/host/vfio_user.o 00:04:00.053 LIB libspdk_dma.a 00:04:00.053 SO libspdk_dma.so.5.0 00:04:00.053 SYMLINK libspdk_dma.so 00:04:00.053 LIB libspdk_ioat.a 00:04:00.053 SO libspdk_ioat.so.7.0 00:04:00.053 SYMLINK libspdk_ioat.so 00:04:00.053 LIB libspdk_vfio_user.a 00:04:00.053 SO libspdk_vfio_user.so.5.0 00:04:00.053 SYMLINK libspdk_vfio_user.so 00:04:00.053 LIB libspdk_util.a 00:04:00.053 SO libspdk_util.so.10.1 00:04:00.053 SYMLINK libspdk_util.so 00:04:00.053 CC lib/conf/conf.o 00:04:00.053 CC lib/rdma_utils/rdma_utils.o 00:04:00.053 CC lib/idxd/idxd.o 00:04:00.053 CC lib/vmd/vmd.o 00:04:00.053 CC lib/json/json_parse.o 00:04:00.053 CC lib/idxd/idxd_user.o 00:04:00.053 CC lib/env_dpdk/env.o 00:04:00.053 CC lib/vmd/led.o 00:04:00.053 CC lib/json/json_util.o 00:04:00.053 CC lib/idxd/idxd_kernel.o 00:04:00.053 CC lib/env_dpdk/memory.o 00:04:00.053 CC lib/json/json_write.o 00:04:00.053 CC lib/env_dpdk/pci.o 00:04:00.053 CC lib/env_dpdk/init.o 00:04:00.053 CC lib/env_dpdk/threads.o 00:04:00.053 CC lib/env_dpdk/pci_ioat.o 00:04:00.053 CC lib/env_dpdk/pci_virtio.o 00:04:00.053 CC lib/env_dpdk/pci_vmd.o 00:04:00.053 CC lib/env_dpdk/pci_idxd.o 00:04:00.053 CC lib/env_dpdk/pci_event.o 00:04:00.053 CC lib/env_dpdk/sigbus_handler.o 00:04:00.053 CC lib/env_dpdk/pci_dpdk.o 00:04:00.053 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.053 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.053 LIB libspdk_rdma_utils.a 00:04:00.053 LIB libspdk_json.a 00:04:00.053 SO libspdk_rdma_utils.so.1.0 00:04:00.053 LIB libspdk_conf.a 00:04:00.053 SO libspdk_conf.so.6.0 00:04:00.053 SO libspdk_json.so.6.0 00:04:00.053 SYMLINK libspdk_rdma_utils.so 00:04:00.053 SYMLINK libspdk_conf.so 00:04:00.053 SYMLINK libspdk_json.so 00:04:00.053 CC lib/rdma_provider/common.o 00:04:00.053 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:04:00.053 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.053 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.053 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.053 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.053 LIB libspdk_idxd.a 00:04:00.053 SO libspdk_idxd.so.12.1 00:04:00.053 LIB libspdk_vmd.a 00:04:00.053 SYMLINK libspdk_idxd.so 00:04:00.053 SO libspdk_vmd.so.6.0 00:04:00.053 SYMLINK libspdk_vmd.so 00:04:00.053 LIB libspdk_rdma_provider.a 00:04:00.053 SO libspdk_rdma_provider.so.7.0 00:04:00.053 LIB libspdk_jsonrpc.a 00:04:00.053 SYMLINK libspdk_rdma_provider.so 00:04:00.053 SO libspdk_jsonrpc.so.6.0 00:04:00.053 LIB libspdk_trace_parser.a 00:04:00.053 SO libspdk_trace_parser.so.6.0 00:04:00.053 SYMLINK libspdk_jsonrpc.so 00:04:00.053 SYMLINK libspdk_trace_parser.so 00:04:00.053 CC lib/rpc/rpc.o 00:04:00.314 LIB libspdk_rpc.a 00:04:00.314 SO libspdk_rpc.so.6.0 00:04:00.314 SYMLINK libspdk_rpc.so 00:04:00.573 CC lib/keyring/keyring.o 00:04:00.573 CC lib/keyring/keyring_rpc.o 00:04:00.573 CC lib/notify/notify.o 00:04:00.573 CC lib/trace/trace.o 00:04:00.573 CC lib/notify/notify_rpc.o 00:04:00.573 CC lib/trace/trace_flags.o 00:04:00.573 CC lib/trace/trace_rpc.o 00:04:00.573 LIB libspdk_notify.a 00:04:00.573 SO libspdk_notify.so.6.0 00:04:00.833 SYMLINK libspdk_notify.so 00:04:00.833 LIB libspdk_keyring.a 00:04:00.833 LIB libspdk_trace.a 00:04:00.833 SO libspdk_keyring.so.2.0 00:04:00.833 SO libspdk_trace.so.11.0 00:04:00.833 SYMLINK libspdk_keyring.so 00:04:00.833 SYMLINK libspdk_trace.so 00:04:01.091 CC lib/sock/sock.o 00:04:01.091 CC lib/sock/sock_rpc.o 00:04:01.091 CC lib/thread/thread.o 00:04:01.091 CC lib/thread/iobuf.o 00:04:01.092 LIB libspdk_env_dpdk.a 00:04:01.092 SO libspdk_env_dpdk.so.15.1 00:04:01.092 SYMLINK libspdk_env_dpdk.so 00:04:01.352 LIB libspdk_sock.a 00:04:01.352 SO libspdk_sock.so.10.0 00:04:01.352 SYMLINK libspdk_sock.so 00:04:01.611 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:01.611 CC lib/nvme/nvme_ctrlr.o 00:04:01.611 CC lib/nvme/nvme_fabric.o 00:04:01.611 CC lib/nvme/nvme_ns_cmd.o 00:04:01.611 CC lib/nvme/nvme_ns.o 00:04:01.611 CC lib/nvme/nvme_pcie_common.o 00:04:01.612 CC lib/nvme/nvme_pcie.o 00:04:01.612 CC lib/nvme/nvme_qpair.o 00:04:01.612 CC lib/nvme/nvme.o 00:04:01.612 CC lib/nvme/nvme_quirks.o 00:04:01.612 CC lib/nvme/nvme_transport.o 00:04:01.612 CC lib/nvme/nvme_discovery.o 00:04:01.612 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:01.612 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:01.612 CC lib/nvme/nvme_tcp.o 00:04:01.612 CC lib/nvme/nvme_opal.o 00:04:01.612 CC lib/nvme/nvme_io_msg.o 00:04:01.612 CC lib/nvme/nvme_poll_group.o 00:04:01.612 CC lib/nvme/nvme_zns.o 00:04:01.612 CC lib/nvme/nvme_stubs.o 00:04:01.612 CC lib/nvme/nvme_auth.o 00:04:01.612 CC lib/nvme/nvme_cuse.o 00:04:01.612 CC lib/nvme/nvme_vfio_user.o 00:04:01.612 CC lib/nvme/nvme_rdma.o 00:04:02.550 LIB libspdk_thread.a 00:04:02.550 SO libspdk_thread.so.11.0 00:04:02.810 SYMLINK libspdk_thread.so 00:04:02.810 CC lib/accel/accel.o 00:04:02.810 CC lib/accel/accel_rpc.o 00:04:02.810 CC lib/accel/accel_sw.o 00:04:02.810 CC lib/fsdev/fsdev.o 00:04:02.810 CC lib/fsdev/fsdev_io.o 00:04:02.810 CC lib/fsdev/fsdev_rpc.o 00:04:02.810 CC lib/blob/blobstore.o 00:04:02.810 CC lib/virtio/virtio.o 00:04:02.810 CC lib/blob/request.o 00:04:02.810 CC lib/virtio/virtio_vhost_user.o 00:04:02.810 CC lib/blob/zeroes.o 00:04:02.810 CC lib/virtio/virtio_vfio_user.o 00:04:02.810 CC lib/virtio/virtio_pci.o 00:04:02.810 CC lib/blob/blob_bs_dev.o 00:04:02.810 CC lib/vfu_tgt/tgt_endpoint.o 00:04:02.810 CC 
lib/vfu_tgt/tgt_rpc.o 00:04:02.810 CC lib/init/json_config.o 00:04:02.810 CC lib/init/subsystem.o 00:04:02.810 CC lib/init/subsystem_rpc.o 00:04:02.810 CC lib/init/rpc.o 00:04:03.069 LIB libspdk_init.a 00:04:03.330 SO libspdk_init.so.6.0 00:04:03.330 LIB libspdk_virtio.a 00:04:03.330 SYMLINK libspdk_init.so 00:04:03.330 LIB libspdk_vfu_tgt.a 00:04:03.330 SO libspdk_virtio.so.7.0 00:04:03.330 SO libspdk_vfu_tgt.so.3.0 00:04:03.330 SYMLINK libspdk_virtio.so 00:04:03.330 SYMLINK libspdk_vfu_tgt.so 00:04:03.330 CC lib/event/app.o 00:04:03.330 CC lib/event/reactor.o 00:04:03.330 CC lib/event/log_rpc.o 00:04:03.330 CC lib/event/app_rpc.o 00:04:03.330 CC lib/event/scheduler_static.o 00:04:03.590 LIB libspdk_fsdev.a 00:04:03.590 SO libspdk_fsdev.so.2.0 00:04:03.590 SYMLINK libspdk_fsdev.so 00:04:03.850 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:03.850 LIB libspdk_event.a 00:04:03.850 SO libspdk_event.so.14.0 00:04:04.109 SYMLINK libspdk_event.so 00:04:04.109 LIB libspdk_nvme.a 00:04:04.109 LIB libspdk_accel.a 00:04:04.109 SO libspdk_accel.so.16.0 00:04:04.109 SYMLINK libspdk_accel.so 00:04:04.368 SO libspdk_nvme.so.15.0 00:04:04.368 CC lib/bdev/bdev.o 00:04:04.368 CC lib/bdev/bdev_rpc.o 00:04:04.368 CC lib/bdev/bdev_zone.o 00:04:04.368 CC lib/bdev/part.o 00:04:04.368 CC lib/bdev/scsi_nvme.o 00:04:04.368 SYMLINK libspdk_nvme.so 00:04:04.628 LIB libspdk_fuse_dispatcher.a 00:04:04.628 SO libspdk_fuse_dispatcher.so.1.0 00:04:04.628 SYMLINK libspdk_fuse_dispatcher.so 00:04:06.007 LIB libspdk_blob.a 00:04:06.007 SO libspdk_blob.so.11.0 00:04:06.007 SYMLINK libspdk_blob.so 00:04:06.266 CC lib/blobfs/blobfs.o 00:04:06.266 CC lib/blobfs/tree.o 00:04:06.266 CC lib/lvol/lvol.o 00:04:07.210 LIB libspdk_blobfs.a 00:04:07.210 LIB libspdk_bdev.a 00:04:07.210 SO libspdk_blobfs.so.10.0 00:04:07.210 SO libspdk_bdev.so.17.0 00:04:07.210 LIB libspdk_lvol.a 00:04:07.210 SYMLINK libspdk_blobfs.so 00:04:07.210 SO libspdk_lvol.so.10.0 00:04:07.210 SYMLINK libspdk_bdev.so 00:04:07.210 SYMLINK libspdk_lvol.so 00:04:07.210 CC lib/scsi/dev.o 00:04:07.210 CC lib/nvmf/ctrlr.o 00:04:07.210 CC lib/scsi/lun.o 00:04:07.210 CC lib/nvmf/ctrlr_discovery.o 00:04:07.210 CC lib/scsi/port.o 00:04:07.210 CC lib/nvmf/ctrlr_bdev.o 00:04:07.210 CC lib/scsi/scsi.o 00:04:07.210 CC lib/nvmf/subsystem.o 00:04:07.210 CC lib/ftl/ftl_core.o 00:04:07.210 CC lib/scsi/scsi_bdev.o 00:04:07.210 CC lib/nvmf/nvmf.o 00:04:07.210 CC lib/scsi/scsi_pr.o 00:04:07.210 CC lib/ftl/ftl_init.o 00:04:07.210 CC lib/nvmf/nvmf_rpc.o 00:04:07.210 CC lib/nbd/nbd.o 00:04:07.210 CC lib/scsi/scsi_rpc.o 00:04:07.210 CC lib/nvmf/transport.o 00:04:07.210 CC lib/ftl/ftl_layout.o 00:04:07.210 CC lib/ublk/ublk.o 00:04:07.210 CC lib/scsi/task.o 00:04:07.210 CC lib/nbd/nbd_rpc.o 00:04:07.210 CC lib/nvmf/tcp.o 00:04:07.210 CC lib/ublk/ublk_rpc.o 00:04:07.210 CC lib/ftl/ftl_debug.o 00:04:07.210 CC lib/nvmf/mdns_server.o 00:04:07.210 CC lib/ftl/ftl_io.o 00:04:07.210 CC lib/nvmf/stubs.o 00:04:07.210 CC lib/ftl/ftl_sb.o 00:04:07.210 CC lib/nvmf/vfio_user.o 00:04:07.210 CC lib/nvmf/rdma.o 00:04:07.210 CC lib/ftl/ftl_l2p_flat.o 00:04:07.210 CC lib/ftl/ftl_l2p.o 00:04:07.210 CC lib/ftl/ftl_nv_cache.o 00:04:07.210 CC lib/nvmf/auth.o 00:04:07.210 CC lib/ftl/ftl_band.o 00:04:07.210 CC lib/ftl/ftl_band_ops.o 00:04:07.210 CC lib/ftl/ftl_writer.o 00:04:07.210 CC lib/ftl/ftl_rq.o 00:04:07.210 CC lib/ftl/ftl_reloc.o 00:04:07.210 CC lib/ftl/ftl_l2p_cache.o 00:04:07.210 CC lib/ftl/ftl_p2l.o 00:04:07.210 CC lib/ftl/ftl_p2l_log.o 00:04:07.210 CC lib/ftl/mngt/ftl_mngt.o 00:04:07.210 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:04:07.210 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:07.210 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:07.210 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:07.210 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:07.792 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:07.792 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:07.792 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:07.792 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:07.792 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:07.792 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:07.792 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:07.792 CC lib/ftl/utils/ftl_conf.o 00:04:07.792 CC lib/ftl/utils/ftl_md.o 00:04:07.792 CC lib/ftl/utils/ftl_mempool.o 00:04:07.792 CC lib/ftl/utils/ftl_bitmap.o 00:04:07.792 CC lib/ftl/utils/ftl_property.o 00:04:07.792 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:07.792 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:07.792 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:07.792 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:07.792 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:07.792 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:08.053 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:08.053 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:08.053 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:08.053 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:08.053 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:08.053 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:08.053 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:08.053 CC lib/ftl/base/ftl_base_dev.o 00:04:08.054 CC lib/ftl/base/ftl_base_bdev.o 00:04:08.054 CC lib/ftl/ftl_trace.o 00:04:08.313 LIB libspdk_nbd.a 00:04:08.313 SO libspdk_nbd.so.7.0 00:04:08.313 SYMLINK libspdk_nbd.so 00:04:08.313 LIB libspdk_scsi.a 00:04:08.313 SO libspdk_scsi.so.9.0 00:04:08.313 LIB libspdk_ublk.a 00:04:08.573 SO libspdk_ublk.so.3.0 00:04:08.573 SYMLINK libspdk_scsi.so 00:04:08.573 SYMLINK libspdk_ublk.so 00:04:08.573 CC lib/iscsi/conn.o 00:04:08.573 CC lib/vhost/vhost.o 00:04:08.573 CC lib/iscsi/init_grp.o 00:04:08.573 CC lib/iscsi/iscsi.o 00:04:08.573 CC lib/vhost/vhost_rpc.o 00:04:08.573 CC lib/iscsi/param.o 00:04:08.573 CC lib/vhost/vhost_scsi.o 00:04:08.573 CC lib/iscsi/portal_grp.o 00:04:08.573 CC lib/vhost/vhost_blk.o 00:04:08.573 CC lib/iscsi/tgt_node.o 00:04:08.573 CC lib/vhost/rte_vhost_user.o 00:04:08.573 CC lib/iscsi/iscsi_subsystem.o 00:04:08.573 CC lib/iscsi/iscsi_rpc.o 00:04:08.573 CC lib/iscsi/task.o 00:04:08.831 LIB libspdk_ftl.a 00:04:09.090 SO libspdk_ftl.so.9.0 00:04:09.349 SYMLINK libspdk_ftl.so 00:04:09.918 LIB libspdk_vhost.a 00:04:09.918 SO libspdk_vhost.so.8.0 00:04:09.918 LIB libspdk_nvmf.a 00:04:10.178 SYMLINK libspdk_vhost.so 00:04:10.178 SO libspdk_nvmf.so.20.0 00:04:10.178 LIB libspdk_iscsi.a 00:04:10.178 SO libspdk_iscsi.so.8.0 00:04:10.178 SYMLINK libspdk_nvmf.so 00:04:10.438 SYMLINK libspdk_iscsi.so 00:04:10.698 CC module/env_dpdk/env_dpdk_rpc.o 00:04:10.698 CC module/vfu_device/vfu_virtio.o 00:04:10.698 CC module/vfu_device/vfu_virtio_blk.o 00:04:10.698 CC module/vfu_device/vfu_virtio_scsi.o 00:04:10.698 CC module/vfu_device/vfu_virtio_rpc.o 00:04:10.698 CC module/vfu_device/vfu_virtio_fs.o 00:04:10.698 CC module/accel/ioat/accel_ioat.o 00:04:10.698 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:10.698 CC module/accel/ioat/accel_ioat_rpc.o 00:04:10.698 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:10.698 CC module/accel/iaa/accel_iaa.o 00:04:10.698 CC module/sock/posix/posix.o 00:04:10.698 CC module/keyring/linux/keyring.o 00:04:10.698 CC module/accel/iaa/accel_iaa_rpc.o 00:04:10.698 CC module/accel/dsa/accel_dsa.o 00:04:10.698 CC 
module/keyring/linux/keyring_rpc.o 00:04:10.698 CC module/keyring/file/keyring.o 00:04:10.698 CC module/accel/error/accel_error.o 00:04:10.698 CC module/scheduler/gscheduler/gscheduler.o 00:04:10.698 CC module/accel/dsa/accel_dsa_rpc.o 00:04:10.698 CC module/fsdev/aio/fsdev_aio.o 00:04:10.698 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:10.698 CC module/keyring/file/keyring_rpc.o 00:04:10.698 CC module/blob/bdev/blob_bdev.o 00:04:10.698 CC module/accel/error/accel_error_rpc.o 00:04:10.698 CC module/fsdev/aio/linux_aio_mgr.o 00:04:10.698 LIB libspdk_env_dpdk_rpc.a 00:04:10.698 SO libspdk_env_dpdk_rpc.so.6.0 00:04:10.957 SYMLINK libspdk_env_dpdk_rpc.so 00:04:10.957 LIB libspdk_keyring_linux.a 00:04:10.957 LIB libspdk_keyring_file.a 00:04:10.957 LIB libspdk_scheduler_gscheduler.a 00:04:10.957 LIB libspdk_scheduler_dpdk_governor.a 00:04:10.957 SO libspdk_keyring_linux.so.1.0 00:04:10.957 SO libspdk_keyring_file.so.2.0 00:04:10.957 SO libspdk_scheduler_gscheduler.so.4.0 00:04:10.957 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:10.957 LIB libspdk_accel_ioat.a 00:04:10.957 LIB libspdk_accel_error.a 00:04:10.957 LIB libspdk_accel_iaa.a 00:04:10.957 SO libspdk_accel_ioat.so.6.0 00:04:10.957 SYMLINK libspdk_scheduler_gscheduler.so 00:04:10.957 SYMLINK libspdk_keyring_file.so 00:04:10.957 SYMLINK libspdk_keyring_linux.so 00:04:10.957 SO libspdk_accel_error.so.2.0 00:04:10.957 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:10.957 SO libspdk_accel_iaa.so.3.0 00:04:10.957 SYMLINK libspdk_accel_ioat.so 00:04:10.957 SYMLINK libspdk_accel_error.so 00:04:10.957 LIB libspdk_blob_bdev.a 00:04:10.957 SYMLINK libspdk_accel_iaa.so 00:04:10.957 LIB libspdk_accel_dsa.a 00:04:10.957 LIB libspdk_scheduler_dynamic.a 00:04:10.957 SO libspdk_blob_bdev.so.11.0 00:04:10.957 SO libspdk_accel_dsa.so.5.0 00:04:10.957 SO libspdk_scheduler_dynamic.so.4.0 00:04:11.216 SYMLINK libspdk_blob_bdev.so 00:04:11.216 SYMLINK libspdk_scheduler_dynamic.so 00:04:11.216 SYMLINK libspdk_accel_dsa.so 00:04:11.479 CC module/blobfs/bdev/blobfs_bdev.o 00:04:11.479 CC module/bdev/null/bdev_null.o 00:04:11.479 CC module/bdev/malloc/bdev_malloc.o 00:04:11.479 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:11.479 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:11.479 CC module/bdev/null/bdev_null_rpc.o 00:04:11.479 CC module/bdev/gpt/gpt.o 00:04:11.479 CC module/bdev/gpt/vbdev_gpt.o 00:04:11.479 CC module/bdev/ftl/bdev_ftl.o 00:04:11.479 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:11.479 CC module/bdev/delay/vbdev_delay.o 00:04:11.479 CC module/bdev/split/vbdev_split.o 00:04:11.479 CC module/bdev/error/vbdev_error.o 00:04:11.479 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:11.479 CC module/bdev/passthru/vbdev_passthru.o 00:04:11.479 CC module/bdev/lvol/vbdev_lvol.o 00:04:11.479 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:11.479 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:11.479 CC module/bdev/split/vbdev_split_rpc.o 00:04:11.479 CC module/bdev/raid/bdev_raid.o 00:04:11.479 CC module/bdev/error/vbdev_error_rpc.o 00:04:11.479 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:11.479 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:11.479 CC module/bdev/raid/bdev_raid_rpc.o 00:04:11.479 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:11.479 CC module/bdev/iscsi/bdev_iscsi.o 00:04:11.479 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:11.479 CC module/bdev/raid/bdev_raid_sb.o 00:04:11.479 CC module/bdev/nvme/bdev_nvme.o 00:04:11.479 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:11.479 CC module/bdev/raid/raid0.o 00:04:11.479 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:04:11.479 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:11.479 CC module/bdev/raid/raid1.o 00:04:11.479 CC module/bdev/nvme/nvme_rpc.o 00:04:11.479 CC module/bdev/aio/bdev_aio.o 00:04:11.479 CC module/bdev/nvme/bdev_mdns_client.o 00:04:11.479 CC module/bdev/raid/concat.o 00:04:11.479 CC module/bdev/aio/bdev_aio_rpc.o 00:04:11.479 CC module/bdev/nvme/vbdev_opal.o 00:04:11.479 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:11.479 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:11.479 LIB libspdk_vfu_device.a 00:04:11.479 SO libspdk_vfu_device.so.3.0 00:04:11.479 LIB libspdk_fsdev_aio.a 00:04:11.479 SO libspdk_fsdev_aio.so.1.0 00:04:11.479 SYMLINK libspdk_vfu_device.so 00:04:11.479 SYMLINK libspdk_fsdev_aio.so 00:04:11.479 LIB libspdk_sock_posix.a 00:04:11.740 SO libspdk_sock_posix.so.6.0 00:04:11.740 SYMLINK libspdk_sock_posix.so 00:04:11.740 LIB libspdk_blobfs_bdev.a 00:04:11.740 SO libspdk_blobfs_bdev.so.6.0 00:04:11.740 LIB libspdk_bdev_gpt.a 00:04:11.740 SO libspdk_bdev_gpt.so.6.0 00:04:11.740 LIB libspdk_bdev_null.a 00:04:11.740 LIB libspdk_bdev_split.a 00:04:11.740 SYMLINK libspdk_blobfs_bdev.so 00:04:11.740 SO libspdk_bdev_null.so.6.0 00:04:11.740 LIB libspdk_bdev_error.a 00:04:11.740 SO libspdk_bdev_split.so.6.0 00:04:11.999 SYMLINK libspdk_bdev_gpt.so 00:04:11.999 LIB libspdk_bdev_ftl.a 00:04:11.999 SO libspdk_bdev_error.so.6.0 00:04:11.999 LIB libspdk_bdev_passthru.a 00:04:11.999 SO libspdk_bdev_ftl.so.6.0 00:04:11.999 SYMLINK libspdk_bdev_null.so 00:04:11.999 SO libspdk_bdev_passthru.so.6.0 00:04:11.999 SYMLINK libspdk_bdev_split.so 00:04:11.999 LIB libspdk_bdev_aio.a 00:04:11.999 SYMLINK libspdk_bdev_error.so 00:04:11.999 SYMLINK libspdk_bdev_ftl.so 00:04:11.999 SO libspdk_bdev_aio.so.6.0 00:04:11.999 LIB libspdk_bdev_zone_block.a 00:04:11.999 SYMLINK libspdk_bdev_passthru.so 00:04:11.999 LIB libspdk_bdev_iscsi.a 00:04:11.999 SO libspdk_bdev_zone_block.so.6.0 00:04:11.999 SO libspdk_bdev_iscsi.so.6.0 00:04:11.999 LIB libspdk_bdev_malloc.a 00:04:11.999 LIB libspdk_bdev_delay.a 00:04:11.999 SYMLINK libspdk_bdev_aio.so 00:04:11.999 SO libspdk_bdev_malloc.so.6.0 00:04:11.999 SO libspdk_bdev_delay.so.6.0 00:04:11.999 SYMLINK libspdk_bdev_zone_block.so 00:04:11.999 SYMLINK libspdk_bdev_iscsi.so 00:04:11.999 SYMLINK libspdk_bdev_delay.so 00:04:11.999 SYMLINK libspdk_bdev_malloc.so 00:04:11.999 LIB libspdk_bdev_lvol.a 00:04:11.999 LIB libspdk_bdev_virtio.a 00:04:12.258 SO libspdk_bdev_lvol.so.6.0 00:04:12.258 SO libspdk_bdev_virtio.so.6.0 00:04:12.258 SYMLINK libspdk_bdev_lvol.so 00:04:12.258 SYMLINK libspdk_bdev_virtio.so 00:04:12.517 LIB libspdk_bdev_raid.a 00:04:12.517 SO libspdk_bdev_raid.so.6.0 00:04:12.776 SYMLINK libspdk_bdev_raid.so 00:04:14.159 LIB libspdk_bdev_nvme.a 00:04:14.159 SO libspdk_bdev_nvme.so.7.1 00:04:14.159 SYMLINK libspdk_bdev_nvme.so 00:04:14.730 CC module/event/subsystems/keyring/keyring.o 00:04:14.730 CC module/event/subsystems/iobuf/iobuf.o 00:04:14.730 CC module/event/subsystems/vmd/vmd.o 00:04:14.730 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:14.730 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:14.730 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:14.730 CC module/event/subsystems/fsdev/fsdev.o 00:04:14.730 CC module/event/subsystems/scheduler/scheduler.o 00:04:14.730 CC module/event/subsystems/sock/sock.o 00:04:14.730 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:14.730 LIB libspdk_event_keyring.a 00:04:14.730 LIB libspdk_event_vhost_blk.a 00:04:14.730 LIB libspdk_event_fsdev.a 00:04:14.730 LIB 
libspdk_event_vmd.a 00:04:14.730 LIB libspdk_event_scheduler.a 00:04:14.730 LIB libspdk_event_vfu_tgt.a 00:04:14.730 LIB libspdk_event_sock.a 00:04:14.730 SO libspdk_event_keyring.so.1.0 00:04:14.730 LIB libspdk_event_iobuf.a 00:04:14.730 SO libspdk_event_vhost_blk.so.3.0 00:04:14.730 SO libspdk_event_fsdev.so.1.0 00:04:14.730 SO libspdk_event_scheduler.so.4.0 00:04:14.730 SO libspdk_event_vfu_tgt.so.3.0 00:04:14.730 SO libspdk_event_vmd.so.6.0 00:04:14.730 SO libspdk_event_sock.so.5.0 00:04:14.730 SO libspdk_event_iobuf.so.3.0 00:04:14.730 SYMLINK libspdk_event_keyring.so 00:04:14.730 SYMLINK libspdk_event_vhost_blk.so 00:04:14.730 SYMLINK libspdk_event_fsdev.so 00:04:14.730 SYMLINK libspdk_event_scheduler.so 00:04:14.730 SYMLINK libspdk_event_vfu_tgt.so 00:04:14.730 SYMLINK libspdk_event_sock.so 00:04:14.730 SYMLINK libspdk_event_vmd.so 00:04:14.730 SYMLINK libspdk_event_iobuf.so 00:04:14.989 CC module/event/subsystems/accel/accel.o 00:04:15.249 LIB libspdk_event_accel.a 00:04:15.249 SO libspdk_event_accel.so.6.0 00:04:15.249 SYMLINK libspdk_event_accel.so 00:04:15.508 CC module/event/subsystems/bdev/bdev.o 00:04:15.508 LIB libspdk_event_bdev.a 00:04:15.508 SO libspdk_event_bdev.so.6.0 00:04:15.766 SYMLINK libspdk_event_bdev.so 00:04:15.766 CC module/event/subsystems/scsi/scsi.o 00:04:15.767 CC module/event/subsystems/nbd/nbd.o 00:04:15.767 CC module/event/subsystems/ublk/ublk.o 00:04:15.767 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:15.767 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:16.025 LIB libspdk_event_nbd.a 00:04:16.025 LIB libspdk_event_ublk.a 00:04:16.025 LIB libspdk_event_scsi.a 00:04:16.025 SO libspdk_event_nbd.so.6.0 00:04:16.025 SO libspdk_event_ublk.so.3.0 00:04:16.025 SO libspdk_event_scsi.so.6.0 00:04:16.025 SYMLINK libspdk_event_nbd.so 00:04:16.025 SYMLINK libspdk_event_ublk.so 00:04:16.025 SYMLINK libspdk_event_scsi.so 00:04:16.025 LIB libspdk_event_nvmf.a 00:04:16.025 SO libspdk_event_nvmf.so.6.0 00:04:16.283 SYMLINK libspdk_event_nvmf.so 00:04:16.283 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:16.283 CC module/event/subsystems/iscsi/iscsi.o 00:04:16.283 LIB libspdk_event_vhost_scsi.a 00:04:16.283 LIB libspdk_event_iscsi.a 00:04:16.283 SO libspdk_event_vhost_scsi.so.3.0 00:04:16.541 SO libspdk_event_iscsi.so.6.0 00:04:16.541 SYMLINK libspdk_event_vhost_scsi.so 00:04:16.541 SYMLINK libspdk_event_iscsi.so 00:04:16.541 SO libspdk.so.6.0 00:04:16.541 SYMLINK libspdk.so 00:04:16.807 CC app/trace_record/trace_record.o 00:04:16.807 CC test/rpc_client/rpc_client_test.o 00:04:16.807 CC app/spdk_nvme_discover/discovery_aer.o 00:04:16.807 CC app/spdk_lspci/spdk_lspci.o 00:04:16.807 CC app/spdk_nvme_identify/identify.o 00:04:16.807 TEST_HEADER include/spdk/accel.h 00:04:16.807 TEST_HEADER include/spdk/accel_module.h 00:04:16.807 CXX app/trace/trace.o 00:04:16.807 TEST_HEADER include/spdk/assert.h 00:04:16.807 CC app/spdk_nvme_perf/perf.o 00:04:16.807 CC app/spdk_top/spdk_top.o 00:04:16.807 TEST_HEADER include/spdk/barrier.h 00:04:16.807 TEST_HEADER include/spdk/base64.h 00:04:16.807 TEST_HEADER include/spdk/bdev.h 00:04:16.807 TEST_HEADER include/spdk/bdev_module.h 00:04:16.807 TEST_HEADER include/spdk/bdev_zone.h 00:04:16.807 TEST_HEADER include/spdk/bit_array.h 00:04:16.807 TEST_HEADER include/spdk/bit_pool.h 00:04:16.807 TEST_HEADER include/spdk/blob_bdev.h 00:04:16.807 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:16.807 TEST_HEADER include/spdk/blobfs.h 00:04:16.807 TEST_HEADER include/spdk/blob.h 00:04:16.807 TEST_HEADER include/spdk/conf.h 
00:04:16.807 TEST_HEADER include/spdk/config.h 00:04:16.807 TEST_HEADER include/spdk/cpuset.h 00:04:16.807 TEST_HEADER include/spdk/crc16.h 00:04:16.807 TEST_HEADER include/spdk/crc32.h 00:04:16.807 TEST_HEADER include/spdk/crc64.h 00:04:16.807 TEST_HEADER include/spdk/dif.h 00:04:16.807 TEST_HEADER include/spdk/dma.h 00:04:16.807 TEST_HEADER include/spdk/env_dpdk.h 00:04:16.807 TEST_HEADER include/spdk/endian.h 00:04:16.807 TEST_HEADER include/spdk/env.h 00:04:16.807 TEST_HEADER include/spdk/event.h 00:04:16.807 TEST_HEADER include/spdk/fd_group.h 00:04:16.807 TEST_HEADER include/spdk/file.h 00:04:16.807 TEST_HEADER include/spdk/fd.h 00:04:16.807 TEST_HEADER include/spdk/fsdev.h 00:04:16.807 TEST_HEADER include/spdk/fsdev_module.h 00:04:16.807 TEST_HEADER include/spdk/ftl.h 00:04:16.807 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:16.807 TEST_HEADER include/spdk/gpt_spec.h 00:04:16.807 TEST_HEADER include/spdk/hexlify.h 00:04:16.807 TEST_HEADER include/spdk/histogram_data.h 00:04:16.807 TEST_HEADER include/spdk/idxd.h 00:04:16.807 TEST_HEADER include/spdk/idxd_spec.h 00:04:16.807 TEST_HEADER include/spdk/init.h 00:04:16.807 TEST_HEADER include/spdk/ioat.h 00:04:16.807 TEST_HEADER include/spdk/ioat_spec.h 00:04:16.807 TEST_HEADER include/spdk/iscsi_spec.h 00:04:16.807 TEST_HEADER include/spdk/json.h 00:04:16.807 TEST_HEADER include/spdk/jsonrpc.h 00:04:16.807 TEST_HEADER include/spdk/keyring.h 00:04:16.807 TEST_HEADER include/spdk/keyring_module.h 00:04:16.807 TEST_HEADER include/spdk/likely.h 00:04:16.807 TEST_HEADER include/spdk/log.h 00:04:16.807 TEST_HEADER include/spdk/md5.h 00:04:16.807 TEST_HEADER include/spdk/lvol.h 00:04:16.807 TEST_HEADER include/spdk/memory.h 00:04:16.807 TEST_HEADER include/spdk/mmio.h 00:04:16.807 TEST_HEADER include/spdk/nbd.h 00:04:16.807 TEST_HEADER include/spdk/net.h 00:04:16.807 TEST_HEADER include/spdk/notify.h 00:04:16.807 TEST_HEADER include/spdk/nvme.h 00:04:16.807 TEST_HEADER include/spdk/nvme_intel.h 00:04:16.807 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:16.807 TEST_HEADER include/spdk/nvme_spec.h 00:04:16.807 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:16.807 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:16.807 TEST_HEADER include/spdk/nvme_zns.h 00:04:16.807 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:16.807 TEST_HEADER include/spdk/nvmf.h 00:04:16.807 TEST_HEADER include/spdk/nvmf_spec.h 00:04:16.807 TEST_HEADER include/spdk/nvmf_transport.h 00:04:16.807 TEST_HEADER include/spdk/opal.h 00:04:16.807 TEST_HEADER include/spdk/opal_spec.h 00:04:16.807 TEST_HEADER include/spdk/pci_ids.h 00:04:16.807 TEST_HEADER include/spdk/pipe.h 00:04:16.807 TEST_HEADER include/spdk/queue.h 00:04:16.807 TEST_HEADER include/spdk/reduce.h 00:04:16.807 TEST_HEADER include/spdk/rpc.h 00:04:16.807 TEST_HEADER include/spdk/scheduler.h 00:04:16.807 TEST_HEADER include/spdk/scsi.h 00:04:16.807 TEST_HEADER include/spdk/scsi_spec.h 00:04:16.807 TEST_HEADER include/spdk/sock.h 00:04:16.807 TEST_HEADER include/spdk/stdinc.h 00:04:16.807 TEST_HEADER include/spdk/string.h 00:04:16.807 TEST_HEADER include/spdk/thread.h 00:04:16.807 TEST_HEADER include/spdk/trace.h 00:04:16.807 TEST_HEADER include/spdk/trace_parser.h 00:04:16.807 TEST_HEADER include/spdk/tree.h 00:04:16.807 TEST_HEADER include/spdk/ublk.h 00:04:16.807 TEST_HEADER include/spdk/util.h 00:04:16.807 TEST_HEADER include/spdk/uuid.h 00:04:16.807 TEST_HEADER include/spdk/version.h 00:04:16.807 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:16.807 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:16.807 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:16.807 TEST_HEADER include/spdk/vhost.h 00:04:16.807 TEST_HEADER include/spdk/vmd.h 00:04:16.807 TEST_HEADER include/spdk/xor.h 00:04:16.807 TEST_HEADER include/spdk/zipf.h 00:04:16.807 CXX test/cpp_headers/accel.o 00:04:16.807 CXX test/cpp_headers/accel_module.o 00:04:16.807 CXX test/cpp_headers/assert.o 00:04:16.807 CXX test/cpp_headers/barrier.o 00:04:16.807 CXX test/cpp_headers/base64.o 00:04:16.807 CXX test/cpp_headers/bdev.o 00:04:16.807 CXX test/cpp_headers/bdev_module.o 00:04:16.807 CXX test/cpp_headers/bdev_zone.o 00:04:16.807 CXX test/cpp_headers/bit_array.o 00:04:16.807 CXX test/cpp_headers/bit_pool.o 00:04:16.807 CXX test/cpp_headers/blob_bdev.o 00:04:16.807 CXX test/cpp_headers/blobfs_bdev.o 00:04:16.807 CXX test/cpp_headers/blobfs.o 00:04:16.807 CXX test/cpp_headers/blob.o 00:04:16.807 CXX test/cpp_headers/conf.o 00:04:16.807 CXX test/cpp_headers/config.o 00:04:16.807 CXX test/cpp_headers/cpuset.o 00:04:16.807 CXX test/cpp_headers/crc16.o 00:04:16.807 CC app/spdk_dd/spdk_dd.o 00:04:16.807 CC app/iscsi_tgt/iscsi_tgt.o 00:04:16.807 CC app/nvmf_tgt/nvmf_main.o 00:04:16.807 CXX test/cpp_headers/crc32.o 00:04:16.807 CC test/thread/poller_perf/poller_perf.o 00:04:16.807 CC app/spdk_tgt/spdk_tgt.o 00:04:16.807 CC examples/ioat/perf/perf.o 00:04:16.807 CC examples/ioat/verify/verify.o 00:04:16.807 CC test/app/histogram_perf/histogram_perf.o 00:04:16.807 CC test/app/stub/stub.o 00:04:16.807 CC examples/util/zipf/zipf.o 00:04:17.068 CC test/app/jsoncat/jsoncat.o 00:04:17.068 CC test/env/memory/memory_ut.o 00:04:17.068 CC test/env/vtophys/vtophys.o 00:04:17.068 CC app/fio/nvme/fio_plugin.o 00:04:17.068 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:17.068 CC test/env/pci/pci_ut.o 00:04:17.068 CC test/dma/test_dma/test_dma.o 00:04:17.068 CC test/app/bdev_svc/bdev_svc.o 00:04:17.068 CC app/fio/bdev/fio_plugin.o 00:04:17.068 LINK spdk_lspci 00:04:17.068 CC test/env/mem_callbacks/mem_callbacks.o 00:04:17.068 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:17.068 LINK rpc_client_test 00:04:17.330 LINK spdk_nvme_discover 00:04:17.330 LINK jsoncat 00:04:17.330 LINK poller_perf 00:04:17.330 LINK spdk_trace_record 00:04:17.330 LINK histogram_perf 00:04:17.330 LINK interrupt_tgt 00:04:17.330 LINK zipf 00:04:17.330 CXX test/cpp_headers/crc64.o 00:04:17.330 LINK vtophys 00:04:17.330 CXX test/cpp_headers/dif.o 00:04:17.330 CXX test/cpp_headers/dma.o 00:04:17.330 CXX test/cpp_headers/endian.o 00:04:17.330 CXX test/cpp_headers/env_dpdk.o 00:04:17.330 CXX test/cpp_headers/env.o 00:04:17.330 CXX test/cpp_headers/event.o 00:04:17.330 LINK stub 00:04:17.330 LINK nvmf_tgt 00:04:17.330 CXX test/cpp_headers/fd_group.o 00:04:17.330 CXX test/cpp_headers/fd.o 00:04:17.330 LINK env_dpdk_post_init 00:04:17.330 CXX test/cpp_headers/file.o 00:04:17.330 CXX test/cpp_headers/fsdev.o 00:04:17.330 CXX test/cpp_headers/fsdev_module.o 00:04:17.330 LINK iscsi_tgt 00:04:17.330 CXX test/cpp_headers/ftl.o 00:04:17.330 CXX test/cpp_headers/fuse_dispatcher.o 00:04:17.330 LINK spdk_tgt 00:04:17.330 CXX test/cpp_headers/gpt_spec.o 00:04:17.330 CXX test/cpp_headers/hexlify.o 00:04:17.330 LINK verify 00:04:17.330 LINK bdev_svc 00:04:17.330 LINK ioat_perf 00:04:17.591 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:17.591 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:17.591 CXX test/cpp_headers/histogram_data.o 00:04:17.591 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:17.591 LINK mem_callbacks 00:04:17.591 CXX test/cpp_headers/idxd.o 00:04:17.591 CXX test/cpp_headers/idxd_spec.o 
00:04:17.591 CXX test/cpp_headers/init.o 00:04:17.591 CXX test/cpp_headers/ioat.o 00:04:17.591 LINK spdk_dd 00:04:17.591 CXX test/cpp_headers/ioat_spec.o 00:04:17.591 CXX test/cpp_headers/iscsi_spec.o 00:04:17.857 CXX test/cpp_headers/json.o 00:04:17.857 CXX test/cpp_headers/jsonrpc.o 00:04:17.857 CXX test/cpp_headers/keyring.o 00:04:17.857 CXX test/cpp_headers/keyring_module.o 00:04:17.857 CXX test/cpp_headers/likely.o 00:04:17.857 CXX test/cpp_headers/log.o 00:04:17.857 CXX test/cpp_headers/lvol.o 00:04:17.857 LINK spdk_trace 00:04:17.857 CXX test/cpp_headers/md5.o 00:04:17.857 CXX test/cpp_headers/memory.o 00:04:17.857 CXX test/cpp_headers/mmio.o 00:04:17.857 CXX test/cpp_headers/nbd.o 00:04:17.857 LINK pci_ut 00:04:17.857 CXX test/cpp_headers/net.o 00:04:17.857 CXX test/cpp_headers/notify.o 00:04:17.857 CXX test/cpp_headers/nvme.o 00:04:17.857 CXX test/cpp_headers/nvme_intel.o 00:04:17.857 CXX test/cpp_headers/nvme_ocssd.o 00:04:17.857 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:17.857 CXX test/cpp_headers/nvme_spec.o 00:04:17.857 CXX test/cpp_headers/nvme_zns.o 00:04:17.857 CXX test/cpp_headers/nvmf_cmd.o 00:04:17.857 CC test/event/event_perf/event_perf.o 00:04:17.857 CC test/event/reactor/reactor.o 00:04:17.857 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:17.857 CC examples/sock/hello_world/hello_sock.o 00:04:17.857 CC test/event/reactor_perf/reactor_perf.o 00:04:17.857 CC examples/vmd/lsvmd/lsvmd.o 00:04:17.857 CXX test/cpp_headers/nvmf.o 00:04:17.857 CXX test/cpp_headers/nvmf_spec.o 00:04:17.857 CC examples/vmd/led/led.o 00:04:17.857 CXX test/cpp_headers/nvmf_transport.o 00:04:18.121 CC examples/thread/thread/thread_ex.o 00:04:18.121 CC examples/idxd/perf/perf.o 00:04:18.121 CC test/event/app_repeat/app_repeat.o 00:04:18.121 LINK nvme_fuzz 00:04:18.121 CC test/event/scheduler/scheduler.o 00:04:18.121 CXX test/cpp_headers/opal.o 00:04:18.121 LINK test_dma 00:04:18.121 CXX test/cpp_headers/opal_spec.o 00:04:18.121 CXX test/cpp_headers/pci_ids.o 00:04:18.121 CXX test/cpp_headers/pipe.o 00:04:18.121 CXX test/cpp_headers/queue.o 00:04:18.121 CXX test/cpp_headers/reduce.o 00:04:18.121 CXX test/cpp_headers/rpc.o 00:04:18.121 CXX test/cpp_headers/scheduler.o 00:04:18.121 CXX test/cpp_headers/scsi.o 00:04:18.121 CXX test/cpp_headers/scsi_spec.o 00:04:18.121 CXX test/cpp_headers/sock.o 00:04:18.121 CXX test/cpp_headers/stdinc.o 00:04:18.121 CXX test/cpp_headers/string.o 00:04:18.121 LINK reactor 00:04:18.121 CXX test/cpp_headers/thread.o 00:04:18.387 LINK spdk_bdev 00:04:18.387 CXX test/cpp_headers/trace.o 00:04:18.387 LINK event_perf 00:04:18.387 LINK lsvmd 00:04:18.387 CXX test/cpp_headers/trace_parser.o 00:04:18.387 LINK vhost_fuzz 00:04:18.387 LINK reactor_perf 00:04:18.387 CXX test/cpp_headers/tree.o 00:04:18.387 LINK led 00:04:18.387 CXX test/cpp_headers/ublk.o 00:04:18.387 CXX test/cpp_headers/util.o 00:04:18.387 CXX test/cpp_headers/uuid.o 00:04:18.387 CXX test/cpp_headers/version.o 00:04:18.387 CXX test/cpp_headers/vfio_user_pci.o 00:04:18.387 LINK spdk_nvme 00:04:18.387 CXX test/cpp_headers/vfio_user_spec.o 00:04:18.387 CXX test/cpp_headers/vhost.o 00:04:18.387 LINK app_repeat 00:04:18.387 LINK spdk_nvme_perf 00:04:18.387 CXX test/cpp_headers/vmd.o 00:04:18.387 CC app/vhost/vhost.o 00:04:18.387 CXX test/cpp_headers/xor.o 00:04:18.387 CXX test/cpp_headers/zipf.o 00:04:18.387 LINK spdk_nvme_identify 00:04:18.387 LINK hello_sock 00:04:18.387 LINK scheduler 00:04:18.647 LINK memory_ut 00:04:18.647 LINK thread 00:04:18.647 LINK spdk_top 00:04:18.647 LINK idxd_perf 00:04:18.647 CC 
test/nvme/compliance/nvme_compliance.o 00:04:18.647 CC test/nvme/aer/aer.o 00:04:18.647 CC test/nvme/simple_copy/simple_copy.o 00:04:18.647 CC test/nvme/startup/startup.o 00:04:18.647 CC test/nvme/reset/reset.o 00:04:18.647 CC test/nvme/overhead/overhead.o 00:04:18.647 CC test/nvme/boot_partition/boot_partition.o 00:04:18.647 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.647 CC test/nvme/connect_stress/connect_stress.o 00:04:18.647 CC test/nvme/cuse/cuse.o 00:04:18.647 CC test/nvme/reserve/reserve.o 00:04:18.647 CC test/nvme/fdp/fdp.o 00:04:18.647 CC test/nvme/e2edp/nvme_dp.o 00:04:18.647 CC test/nvme/err_injection/err_injection.o 00:04:18.647 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:18.647 CC test/nvme/sgl/sgl.o 00:04:18.906 LINK vhost 00:04:18.906 CC test/blobfs/mkfs/mkfs.o 00:04:18.906 CC test/accel/dif/dif.o 00:04:18.906 CC test/lvol/esnap/esnap.o 00:04:18.906 CC examples/nvme/hotplug/hotplug.o 00:04:18.906 CC examples/nvme/reconnect/reconnect.o 00:04:18.906 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:18.906 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:18.906 CC examples/nvme/hello_world/hello_world.o 00:04:18.906 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:18.906 CC examples/nvme/arbitration/arbitration.o 00:04:18.906 CC examples/nvme/abort/abort.o 00:04:18.906 LINK startup 00:04:18.906 LINK err_injection 00:04:18.906 LINK boot_partition 00:04:18.906 LINK doorbell_aers 00:04:18.906 LINK connect_stress 00:04:19.165 CC examples/accel/perf/accel_perf.o 00:04:19.165 LINK mkfs 00:04:19.165 CC examples/blob/cli/blobcli.o 00:04:19.165 LINK fused_ordering 00:04:19.165 CC examples/blob/hello_world/hello_blob.o 00:04:19.165 LINK sgl 00:04:19.165 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:19.165 LINK nvme_dp 00:04:19.165 LINK reserve 00:04:19.165 LINK simple_copy 00:04:19.165 LINK overhead 00:04:19.165 LINK aer 00:04:19.165 LINK fdp 00:04:19.165 LINK reset 00:04:19.165 LINK pmr_persistence 00:04:19.165 LINK hello_world 00:04:19.424 LINK hotplug 00:04:19.424 LINK cmb_copy 00:04:19.424 LINK nvme_compliance 00:04:19.424 LINK reconnect 00:04:19.424 LINK abort 00:04:19.424 LINK hello_blob 00:04:19.424 LINK arbitration 00:04:19.424 LINK hello_fsdev 00:04:19.683 LINK nvme_manage 00:04:19.683 LINK dif 00:04:19.683 LINK blobcli 00:04:19.683 LINK accel_perf 00:04:19.943 LINK iscsi_fuzz 00:04:19.943 CC test/bdev/bdevio/bdevio.o 00:04:19.943 CC examples/bdev/hello_world/hello_bdev.o 00:04:19.943 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.202 LINK hello_bdev 00:04:20.461 LINK bdevio 00:04:20.461 LINK cuse 00:04:21.029 LINK bdevperf 00:04:21.287 CC examples/nvmf/nvmf/nvmf.o 00:04:21.546 LINK nvmf 00:04:24.080 LINK esnap 00:04:24.339 00:04:24.339 real 1m7.994s 00:04:24.339 user 9m3.756s 00:04:24.339 sys 1m59.719s 00:04:24.339 02:45:34 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:24.339 02:45:34 make -- common/autotest_common.sh@10 -- $ set +x 00:04:24.339 ************************************ 00:04:24.339 END TEST make 00:04:24.339 ************************************ 00:04:24.339 02:45:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:24.339 02:45:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:24.339 02:45:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:24.339 02:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.339 02:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:24.339 02:45:34 -- pm/common@44 -- $ 
pid=6121 00:04:24.339 02:45:34 -- pm/common@50 -- $ kill -TERM 6121 00:04:24.339 02:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.339 02:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:24.339 02:45:34 -- pm/common@44 -- $ pid=6123 00:04:24.339 02:45:34 -- pm/common@50 -- $ kill -TERM 6123 00:04:24.339 02:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.339 02:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:24.339 02:45:34 -- pm/common@44 -- $ pid=6125 00:04:24.339 02:45:34 -- pm/common@50 -- $ kill -TERM 6125 00:04:24.339 02:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.339 02:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:24.339 02:45:34 -- pm/common@44 -- $ pid=6156 00:04:24.339 02:45:34 -- pm/common@50 -- $ sudo -E kill -TERM 6156 00:04:24.339 02:45:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:24.339 02:45:34 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:24.600 02:45:34 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.600 02:45:34 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.600 02:45:34 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.600 02:45:35 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.600 02:45:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.600 02:45:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.600 02:45:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.600 02:45:35 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.600 02:45:35 -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.600 02:45:35 -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.600 02:45:35 -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.600 02:45:35 -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.600 02:45:35 -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.600 02:45:35 -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.600 02:45:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.600 02:45:35 -- scripts/common.sh@344 -- # case "$op" in 00:04:24.600 02:45:35 -- scripts/common.sh@345 -- # : 1 00:04:24.600 02:45:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.600 02:45:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.600 02:45:35 -- scripts/common.sh@365 -- # decimal 1 00:04:24.600 02:45:35 -- scripts/common.sh@353 -- # local d=1 00:04:24.600 02:45:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.600 02:45:35 -- scripts/common.sh@355 -- # echo 1 00:04:24.600 02:45:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.600 02:45:35 -- scripts/common.sh@366 -- # decimal 2 00:04:24.600 02:45:35 -- scripts/common.sh@353 -- # local d=2 00:04:24.600 02:45:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.600 02:45:35 -- scripts/common.sh@355 -- # echo 2 00:04:24.600 02:45:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.600 02:45:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.600 02:45:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.600 02:45:35 -- scripts/common.sh@368 -- # return 0 00:04:24.600 02:45:35 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.600 02:45:35 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.600 --rc genhtml_branch_coverage=1 00:04:24.600 --rc genhtml_function_coverage=1 00:04:24.600 --rc genhtml_legend=1 00:04:24.600 --rc geninfo_all_blocks=1 00:04:24.600 --rc geninfo_unexecuted_blocks=1 00:04:24.600 00:04:24.600 ' 00:04:24.600 02:45:35 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.600 --rc genhtml_branch_coverage=1 00:04:24.600 --rc genhtml_function_coverage=1 00:04:24.600 --rc genhtml_legend=1 00:04:24.600 --rc geninfo_all_blocks=1 00:04:24.600 --rc geninfo_unexecuted_blocks=1 00:04:24.600 00:04:24.600 ' 00:04:24.600 02:45:35 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.600 --rc genhtml_branch_coverage=1 00:04:24.600 --rc genhtml_function_coverage=1 00:04:24.600 --rc genhtml_legend=1 00:04:24.600 --rc geninfo_all_blocks=1 00:04:24.600 --rc geninfo_unexecuted_blocks=1 00:04:24.600 00:04:24.600 ' 00:04:24.600 02:45:35 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.600 --rc genhtml_branch_coverage=1 00:04:24.600 --rc genhtml_function_coverage=1 00:04:24.600 --rc genhtml_legend=1 00:04:24.600 --rc geninfo_all_blocks=1 00:04:24.600 --rc geninfo_unexecuted_blocks=1 00:04:24.600 00:04:24.600 ' 00:04:24.600 02:45:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.600 02:45:35 -- nvmf/common.sh@7 -- # uname -s 00:04:24.600 02:45:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.600 02:45:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.600 02:45:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.600 02:45:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.600 02:45:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.600 02:45:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.600 02:45:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.600 02:45:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.600 02:45:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.600 02:45:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.600 02:45:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:24.600 02:45:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:24.600 02:45:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.600 02:45:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.600 02:45:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:24.600 02:45:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.600 02:45:35 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.601 02:45:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.601 02:45:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.601 02:45:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.601 02:45:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.601 02:45:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.601 02:45:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.601 02:45:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.601 02:45:35 -- paths/export.sh@5 -- # export PATH 00:04:24.601 02:45:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.601 02:45:35 -- nvmf/common.sh@51 -- # : 0 00:04:24.601 02:45:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.601 02:45:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:24.601 02:45:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.601 02:45:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.601 02:45:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.601 02:45:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.601 02:45:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.601 02:45:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.601 02:45:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.601 02:45:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.601 02:45:35 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.601 02:45:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.601 02:45:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.601 02:45:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:24.601 02:45:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.601 02:45:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:24.601 02:45:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.601 02:45:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:24.601 02:45:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.601 02:45:35 -- spdk/autotest.sh@48 -- # udevadm_pid=87163 00:04:24.601 02:45:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.601 02:45:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:24.601 02:45:35 -- pm/common@17 -- # local monitor 00:04:24.601 02:45:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.601 02:45:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.601 02:45:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.601 02:45:35 -- pm/common@21 -- # date +%s 00:04:24.601 02:45:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.601 02:45:35 -- pm/common@21 -- # date +%s 00:04:24.601 02:45:35 -- pm/common@25 -- # sleep 1 00:04:24.601 02:45:35 -- pm/common@21 -- # date +%s 00:04:24.601 02:45:35 -- pm/common@21 -- # date +%s 00:04:24.601 02:45:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731980735 00:04:24.601 02:45:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731980735 00:04:24.601 02:45:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731980735 00:04:24.601 02:45:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731980735 00:04:24.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731980735_collect-vmstat.pm.log 00:04:24.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731980735_collect-cpu-load.pm.log 00:04:24.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731980735_collect-cpu-temp.pm.log 00:04:24.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731980735_collect-bmc-pm.bmc.pm.log 00:04:25.983 02:45:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:25.983 02:45:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:25.983 02:45:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.983 02:45:36 -- common/autotest_common.sh@10 -- # set +x 00:04:25.983 02:45:36 -- spdk/autotest.sh@59 -- # create_test_list 00:04:25.983 02:45:36 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:25.983 02:45:36 -- common/autotest_common.sh@10 -- # set +x 00:04:25.983 02:45:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:25.983 02:45:36 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.983 02:45:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.983 02:45:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:25.983 02:45:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.983 02:45:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:25.983 02:45:36 -- common/autotest_common.sh@1457 -- # uname 00:04:25.983 02:45:36 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:25.983 02:45:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:25.983 02:45:36 -- common/autotest_common.sh@1477 -- # uname 00:04:25.983 02:45:36 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:25.983 02:45:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:25.983 02:45:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:25.983 lcov: LCOV version 1.15 00:04:25.983 02:45:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:58.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:58.066 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.336 02:46:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:03.336 02:46:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.336 02:46:13 -- common/autotest_common.sh@10 -- # set +x 00:05:03.336 02:46:13 -- spdk/autotest.sh@78 -- # rm -f 00:05:03.336 02:46:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.718 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:04.718 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:04.718 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:04.718 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:04.718 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:04.718 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:04.718 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:04.718 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:04.718 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:04.718 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:04.718 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:04.718 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:04.718 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:04.718 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:04.718 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:04.718 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:04.718 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:04.718 02:46:15 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:04.718 02:46:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:04.718 02:46:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:04.718 02:46:15 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:04.718 02:46:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:04.718 02:46:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:04.718 02:46:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:04.718 02:46:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.718 02:46:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.718 02:46:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:04.718 02:46:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.718 02:46:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.718 02:46:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:04.718 02:46:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:04.718 02:46:15 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.718 No valid GPT data, bailing 00:05:04.718 02:46:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.718 02:46:15 -- scripts/common.sh@394 -- # pt= 00:05:04.718 02:46:15 -- scripts/common.sh@395 -- # return 1 00:05:04.718 02:46:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.718 1+0 records in 00:05:04.718 1+0 records out 00:05:04.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00232815 s, 450 MB/s 00:05:04.718 02:46:15 -- spdk/autotest.sh@105 -- # sync 00:05:04.718 02:46:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.718 02:46:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.718 02:46:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:07.258 02:46:17 -- spdk/autotest.sh@111 -- # uname -s 00:05:07.258 02:46:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:07.258 02:46:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:07.258 02:46:17 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:08.199 Hugepages 00:05:08.199 node hugesize free / total 00:05:08.199 node0 1048576kB 0 / 0 00:05:08.199 node0 2048kB 0 / 0 00:05:08.199 node1 1048576kB 0 / 0 00:05:08.199 node1 2048kB 0 / 0 00:05:08.199 00:05:08.199 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.199 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:08.199 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:08.199 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:08.199 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:08.199 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:08.199 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:08.199 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:08.199 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:08.199 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:08.458 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:08.458 02:46:18 -- spdk/autotest.sh@117 -- # uname -s 00:05:08.458 02:46:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:08.458 02:46:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:08.458 02:46:18 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.394 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:09.394 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:09.655 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:09.655 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:09.655 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:09.655 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:09.655 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:09.655 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:09.655 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.598 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:10.598 02:46:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:11.980 02:46:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:11.980 02:46:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:11.981 02:46:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.981 02:46:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:11.981 02:46:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.981 02:46:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.981 02:46:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.981 02:46:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.981 02:46:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.981 02:46:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:11.981 02:46:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:11.981 02:46:22 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.920 Waiting for block devices as requested 00:05:12.920 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:13.182 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:13.182 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:13.182 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:13.442 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:13.442 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:13.442 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:13.442 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:13.701 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:13.701 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:13.701 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:13.701 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:13.962 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:13.962 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:13.962 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:13.962 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:14.222 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:14.222 02:46:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:14.222 02:46:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:14.222 02:46:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:14.222 02:46:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:14.222 02:46:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:14.222 02:46:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:14.222 02:46:24 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:14.222 02:46:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:14.222 02:46:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:14.222 02:46:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:14.222 02:46:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:14.222 02:46:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:14.222 02:46:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:14.222 02:46:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:14.222 02:46:24 -- common/autotest_common.sh@1543 -- # continue 00:05:14.222 02:46:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:14.222 02:46:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.222 02:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:14.222 02:46:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:14.222 02:46:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.222 02:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:14.222 02:46:24 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.601 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:15.601 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:15.601 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:15.601 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:15.601 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:15.601 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:15.601 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:15.601 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:15.601 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:15.601 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:15.601 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:15.601 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:15.601 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:15.601 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:15.859 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:15.859 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:16.800 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:16.800 02:46:27 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:16.800 02:46:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.800 02:46:27 -- common/autotest_common.sh@10 -- # set +x 00:05:16.800 02:46:27 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:16.800 02:46:27 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:16.800 02:46:27 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:16.800 02:46:27 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:16.800 02:46:27 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:16.800 02:46:27 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:16.800 02:46:27 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:16.800 02:46:27 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:16.800 02:46:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:16.800 02:46:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:16.800 02:46:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.800 02:46:27 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:16.800 02:46:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:16.800 02:46:27 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:16.800 02:46:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:16.800 02:46:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:16.800 02:46:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:16.800 02:46:27 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:16.800 02:46:27 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:16.800 02:46:27 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:16.800 02:46:27 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:16.800 02:46:27 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:16.800 02:46:27 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:16.800 02:46:27 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=97918 00:05:16.800 02:46:27 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.800 02:46:27 -- common/autotest_common.sh@1585 -- # waitforlisten 97918 00:05:16.800 02:46:27 -- common/autotest_common.sh@835 -- # '[' -z 97918 ']' 00:05:16.800 02:46:27 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.800 02:46:27 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.800 02:46:27 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.800 02:46:27 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.800 02:46:27 -- common/autotest_common.sh@10 -- # set +x 00:05:17.060 [2024-11-19 02:46:27.427262] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
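The controller that opal_revert_cleanup operates on is picked by the helpers traced just above: get_nvme_bdfs reads BDFs out of gen_nvme.sh via jq, get_nvme_bdfs_by_id keeps only devices whose PCI device ID matches 0x0a54, and the OACS field from nvme id-ctrl is checked for the namespace-management bit. A minimal standalone sketch of that flow (not part of autotest.sh), assuming jq and nvme-cli are installed and reusing this workspace's SPDK path and device ID; adjust SPDK_DIR as needed:

#!/usr/bin/env bash
# Sketch of the controller selection traced above: enumerate NVMe BDFs with
# gen_nvme.sh, keep those whose PCI device ID is 0x0a54, and check the OACS
# namespace-management bit with nvme-cli.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

mapfile -t bdfs < <("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

for bdf in "${bdfs[@]}"; do
    # Keep only controllers with the PCI device ID the cleanup step filters on.
    [[ "$(cat "/sys/bus/pci/devices/$bdf/device")" == 0x0a54 ]] || continue

    # Map the BDF back to its kernel controller name (nvme0, nvme1, ...).
    ctrlr=""
    for node in /sys/class/nvme/nvme*; do
        if readlink -f "$node" | grep -q "$bdf/nvme/nvme"; then
            ctrlr=$(basename "$node")
            break
        fi
    done
    [[ -n "$ctrlr" ]] || continue

    # OACS bit 3 (0x8) advertises namespace management support.
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then
        echo "$bdf (/dev/$ctrlr) supports namespace management"
    fi
done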
00:05:17.060 [2024-11-19 02:46:27.427353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97918 ] 00:05:17.060 [2024-11-19 02:46:27.492917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.060 [2024-11-19 02:46:27.539160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.319 02:46:27 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.319 02:46:27 -- common/autotest_common.sh@868 -- # return 0 00:05:17.319 02:46:27 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:17.319 02:46:27 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:17.319 02:46:27 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:20.614 nvme0n1 00:05:20.614 02:46:30 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:20.614 [2024-11-19 02:46:31.134685] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:20.614 [2024-11-19 02:46:31.134734] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:20.614 request: 00:05:20.614 { 00:05:20.614 "nvme_ctrlr_name": "nvme0", 00:05:20.614 "password": "test", 00:05:20.614 "method": "bdev_nvme_opal_revert", 00:05:20.614 "req_id": 1 00:05:20.614 } 00:05:20.614 Got JSON-RPC error response 00:05:20.614 response: 00:05:20.614 { 00:05:20.614 "code": -32603, 00:05:20.614 "message": "Internal error" 00:05:20.614 } 00:05:20.614 02:46:31 -- common/autotest_common.sh@1591 -- # true 00:05:20.614 02:46:31 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:20.614 02:46:31 -- common/autotest_common.sh@1595 -- # killprocess 97918 00:05:20.614 02:46:31 -- common/autotest_common.sh@954 -- # '[' -z 97918 ']' 00:05:20.614 02:46:31 -- common/autotest_common.sh@958 -- # kill -0 97918 00:05:20.614 02:46:31 -- common/autotest_common.sh@959 -- # uname 00:05:20.614 02:46:31 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.614 02:46:31 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97918 00:05:20.614 02:46:31 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.614 02:46:31 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.614 02:46:31 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97918' 00:05:20.614 killing process with pid 97918 00:05:20.614 02:46:31 -- common/autotest_common.sh@973 -- # kill 97918 00:05:20.615 02:46:31 -- common/autotest_common.sh@978 -- # wait 97918 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.874 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152
00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:20.875 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:22.779 02:46:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:22.779 02:46:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:22.779 02:46:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:22.779 02:46:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:22.779 02:46:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:22.779 02:46:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.779 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:05:22.779 02:46:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:22.779 02:46:32 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:22.779 02:46:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.779 02:46:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.779 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:05:22.779 ************************************ 00:05:22.779 START TEST env 00:05:22.779 ************************************ 00:05:22.779 02:46:32 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:22.779 * Looking for test storage... 00:05:22.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.779 02:46:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.779 02:46:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.779 02:46:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.779 02:46:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.779 02:46:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.779 02:46:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.779 02:46:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.779 02:46:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.779 02:46:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.779 02:46:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.779 02:46:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.779 02:46:33 env -- scripts/common.sh@344 -- # case "$op" in 00:05:22.779 02:46:33 env -- scripts/common.sh@345 -- # : 1 00:05:22.779 02:46:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.779 02:46:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.779 02:46:33 env -- scripts/common.sh@365 -- # decimal 1 00:05:22.779 02:46:33 env -- scripts/common.sh@353 -- # local d=1 00:05:22.779 02:46:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.779 02:46:33 env -- scripts/common.sh@355 -- # echo 1 00:05:22.779 02:46:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.779 02:46:33 env -- scripts/common.sh@366 -- # decimal 2 00:05:22.779 02:46:33 env -- scripts/common.sh@353 -- # local d=2 00:05:22.779 02:46:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.779 02:46:33 env -- scripts/common.sh@355 -- # echo 2 00:05:22.779 02:46:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.779 02:46:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.779 02:46:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.779 02:46:33 env -- scripts/common.sh@368 -- # return 0 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.779 --rc genhtml_branch_coverage=1 00:05:22.779 --rc genhtml_function_coverage=1 00:05:22.779 --rc genhtml_legend=1 00:05:22.779 --rc geninfo_all_blocks=1 00:05:22.779 --rc geninfo_unexecuted_blocks=1 00:05:22.779 00:05:22.779 ' 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.779 --rc genhtml_branch_coverage=1 00:05:22.779 --rc genhtml_function_coverage=1 00:05:22.779 --rc genhtml_legend=1 00:05:22.779 --rc geninfo_all_blocks=1 00:05:22.779 --rc geninfo_unexecuted_blocks=1 00:05:22.779 00:05:22.779 ' 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.779 --rc genhtml_branch_coverage=1 00:05:22.779 --rc genhtml_function_coverage=1 00:05:22.779 --rc genhtml_legend=1 00:05:22.779 --rc geninfo_all_blocks=1 00:05:22.779 --rc geninfo_unexecuted_blocks=1 00:05:22.779 00:05:22.779 ' 00:05:22.779 02:46:33 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.779 --rc genhtml_branch_coverage=1 00:05:22.779 --rc genhtml_function_coverage=1 00:05:22.779 --rc genhtml_legend=1 00:05:22.780 --rc geninfo_all_blocks=1 00:05:22.780 --rc geninfo_unexecuted_blocks=1 00:05:22.780 00:05:22.780 ' 00:05:22.780 02:46:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.780 02:46:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.780 02:46:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.780 02:46:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.780 ************************************ 00:05:22.780 START TEST env_memory 00:05:22.780 ************************************ 00:05:22.780 02:46:33 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.780 00:05:22.780 00:05:22.780 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.780 http://cunit.sourceforge.net/ 00:05:22.780 00:05:22.780 00:05:22.780 Suite: memory 00:05:22.780 Test: alloc and free memory map ...[2024-11-19 02:46:33.156033] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:22.780 passed 00:05:22.780 Test: mem map translation ...[2024-11-19 02:46:33.175788] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:22.780 [2024-11-19 02:46:33.175809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:22.780 [2024-11-19 02:46:33.175865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:22.780 [2024-11-19 02:46:33.175876] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.780 passed 00:05:22.780 Test: mem map registration ...[2024-11-19 02:46:33.217052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:22.780 [2024-11-19 02:46:33.217072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:22.780 passed 00:05:22.780 Test: mem map adjacent registrations ...passed 00:05:22.780 00:05:22.780 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.780 suites 1 1 n/a 0 0 00:05:22.780 tests 4 4 4 0 0 00:05:22.780 asserts 152 152 152 0 n/a 00:05:22.780 00:05:22.780 Elapsed time = 0.142 seconds 00:05:22.780 00:05:22.780 real 0m0.152s 00:05:22.780 user 0m0.147s 00:05:22.780 sys 0m0.004s 00:05:22.780 02:46:33 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.780 02:46:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.780 ************************************ 00:05:22.780 END TEST env_memory 00:05:22.780 ************************************ 00:05:22.780 02:46:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.780 02:46:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.780 02:46:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.780 02:46:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.780 ************************************ 00:05:22.780 START TEST env_vtophys 00:05:22.780 ************************************ 00:05:22.780 02:46:33 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.780 EAL: lib.eal log level changed from notice to debug 00:05:22.780 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.780 EAL: Detected lcore 1 as core 1 on socket 0 00:05:22.780 EAL: Detected lcore 2 as core 2 on socket 0 00:05:22.780 EAL: Detected lcore 3 as core 3 on socket 0 00:05:22.780 EAL: Detected lcore 4 as core 4 on socket 0 00:05:22.780 EAL: Detected lcore 5 as core 5 on socket 0 00:05:22.780 EAL: Detected lcore 6 as core 8 on socket 0 00:05:22.780 EAL: Detected lcore 7 as core 9 on socket 0 00:05:22.780 EAL: Detected lcore 8 as core 10 on socket 0 00:05:22.780 EAL: Detected lcore 9 as core 11 on socket 0 00:05:22.780 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:22.780 EAL: Detected lcore 11 as core 13 on socket 0 00:05:22.780 EAL: Detected lcore 12 as core 0 on socket 1 00:05:22.780 EAL: Detected lcore 13 as core 1 on socket 1 00:05:22.780 EAL: Detected lcore 14 as core 2 on socket 1 00:05:22.780 EAL: Detected lcore 15 as core 3 on socket 1 00:05:22.780 EAL: Detected lcore 16 as core 4 on socket 1 00:05:22.780 EAL: Detected lcore 17 as core 5 on socket 1 00:05:22.780 EAL: Detected lcore 18 as core 8 on socket 1 00:05:22.780 EAL: Detected lcore 19 as core 9 on socket 1 00:05:22.780 EAL: Detected lcore 20 as core 10 on socket 1 00:05:22.780 EAL: Detected lcore 21 as core 11 on socket 1 00:05:22.780 EAL: Detected lcore 22 as core 12 on socket 1 00:05:22.780 EAL: Detected lcore 23 as core 13 on socket 1 00:05:22.780 EAL: Detected lcore 24 as core 0 on socket 0 00:05:22.780 EAL: Detected lcore 25 as core 1 on socket 0 00:05:22.780 EAL: Detected lcore 26 as core 2 on socket 0 00:05:22.780 EAL: Detected lcore 27 as core 3 on socket 0 00:05:22.780 EAL: Detected lcore 28 as core 4 on socket 0 00:05:22.780 EAL: Detected lcore 29 as core 5 on socket 0 00:05:22.780 EAL: Detected lcore 30 as core 8 on socket 0 00:05:22.780 EAL: Detected lcore 31 as core 9 on socket 0 00:05:22.780 EAL: Detected lcore 32 as core 10 on socket 0 00:05:22.780 EAL: Detected lcore 33 as core 11 on socket 0 00:05:22.780 EAL: Detected lcore 34 as core 12 on socket 0 00:05:22.780 EAL: Detected lcore 35 as core 13 on socket 0 00:05:22.780 EAL: Detected lcore 36 as core 0 on socket 1 00:05:22.780 EAL: Detected lcore 37 as core 1 on socket 1 00:05:22.780 EAL: Detected lcore 38 as core 2 on socket 1 00:05:22.780 EAL: Detected lcore 39 as core 3 on socket 1 00:05:22.780 EAL: Detected lcore 40 as core 4 on socket 1 00:05:22.780 EAL: Detected lcore 41 as core 5 on socket 1 00:05:22.780 EAL: Detected lcore 42 as core 8 on socket 1 00:05:22.780 EAL: Detected lcore 43 as core 9 on socket 1 00:05:22.780 EAL: Detected lcore 44 as core 10 on socket 1 00:05:22.780 EAL: Detected lcore 45 as core 11 on socket 1 00:05:22.780 EAL: Detected lcore 46 as core 12 on socket 1 00:05:22.780 EAL: Detected lcore 47 as core 13 on socket 1 00:05:22.780 EAL: Maximum logical cores by configuration: 128 00:05:22.780 EAL: Detected CPU lcores: 48 00:05:22.780 EAL: Detected NUMA nodes: 2 00:05:22.780 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:22.780 EAL: Detected shared linkage of DPDK 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:22.780 EAL: Registered [vdev] bus. 
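The lcore table EAL prints above (48 lcores spread across 2 NUMA sockets) is read from the host topology. As an illustration only, not part of the test, the same core/socket layout can be cross-checked outside of DPDK with lscpu and standard sysfs paths:

# Cross-check of the core/socket layout reported by EAL.
lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA)'
for node in /sys/devices/system/node/node*; do
    echo "$(basename "$node"): cpus $(cat "$node/cpulist")"
done
# Per-CPU core and package IDs (what EAL reports as "core X on socket Y"):
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    printf '%s core=%s socket=%s\n' "$(basename "$cpu")" \
        "$(cat "$cpu/topology/core_id")" \
        "$(cat "$cpu/topology/physical_package_id")"
done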
00:05:22.780 EAL: bus.vdev log level changed from disabled to notice 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:22.780 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:22.780 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:22.780 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:22.780 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.780 EAL: No shared files mode enabled, IPC is disabled 00:05:22.780 EAL: Bus pci wants IOVA as 'DC' 00:05:22.780 EAL: Bus vdev wants IOVA as 'DC' 00:05:22.780 EAL: Buses did not request a specific IOVA mode. 00:05:22.780 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:22.780 EAL: Selected IOVA mode 'VA' 00:05:22.780 EAL: Probing VFIO support... 00:05:22.780 EAL: IOMMU type 1 (Type 1) is supported 00:05:22.780 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:22.780 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:22.780 EAL: VFIO support initialized 00:05:22.780 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.780 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.780 EAL: Setting up physically contiguous memory... 
00:05:22.780 EAL: Setting maximum number of open files to 524288 00:05:22.780 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.780 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:22.780 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.780 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.780 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.780 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.780 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.780 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.780 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.780 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.780 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.780 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.780 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.780 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.780 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.780 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.780 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.780 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.780 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.780 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.780 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.780 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.780 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.780 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:22.780 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.780 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:22.780 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.781 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:22.781 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:22.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.781 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:22.781 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.781 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:22.781 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:22.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.781 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:22.781 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.781 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:22.781 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:22.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.781 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:22.781 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.781 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:22.781 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:22.781 EAL: Hugepages will be freed exactly as allocated. 00:05:22.781 EAL: No shared files mode enabled, IPC is disabled 00:05:22.781 EAL: No shared files mode enabled, IPC is disabled 00:05:22.781 EAL: TSC frequency is ~2700000 KHz 00:05:22.781 EAL: Main lcore 0 is ready (tid=7f36914c0a00;cpuset=[0]) 00:05:22.781 EAL: Trying to obtain current memory policy. 00:05:22.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.781 EAL: Restoring previous memory policy: 0 00:05:22.781 EAL: request: mp_malloc_sync 00:05:22.781 EAL: No shared files mode enabled, IPC is disabled 00:05:22.781 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.781 EAL: No shared files mode enabled, IPC is disabled 00:05:22.781 EAL: No shared files mode enabled, IPC is disabled 00:05:22.781 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.781 EAL: Mem event callback 'spdk:(nil)' registered 00:05:23.040 00:05:23.040 00:05:23.040 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.040 http://cunit.sourceforge.net/ 00:05:23.040 00:05:23.040 00:05:23.040 Suite: components_suite 00:05:23.040 Test: vtophys_malloc_test ...passed 00:05:23.040 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.040 EAL: Trying to obtain current memory policy. 00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.040 EAL: Trying to obtain current memory policy. 00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.040 EAL: Trying to obtain current memory policy. 
00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.040 EAL: Trying to obtain current memory policy. 00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.040 EAL: Trying to obtain current memory policy. 00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.040 EAL: Trying to obtain current memory policy. 00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.040 EAL: Trying to obtain current memory policy. 00:05:23.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.040 EAL: Restoring previous memory policy: 4 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.040 EAL: request: mp_malloc_sync 00:05:23.040 EAL: No shared files mode enabled, IPC is disabled 00:05:23.040 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.299 EAL: request: mp_malloc_sync 00:05:23.299 EAL: No shared files mode enabled, IPC is disabled 00:05:23.299 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.299 EAL: Trying to obtain current memory policy. 
00:05:23.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.299 EAL: Restoring previous memory policy: 4 00:05:23.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.299 EAL: request: mp_malloc_sync 00:05:23.299 EAL: No shared files mode enabled, IPC is disabled 00:05:23.299 EAL: Heap on socket 0 was expanded by 514MB 00:05:23.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.558 EAL: request: mp_malloc_sync 00:05:23.558 EAL: No shared files mode enabled, IPC is disabled 00:05:23.558 EAL: Heap on socket 0 was shrunk by 514MB 00:05:23.558 EAL: Trying to obtain current memory policy. 00:05:23.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.816 EAL: Restoring previous memory policy: 4 00:05:23.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.816 EAL: request: mp_malloc_sync 00:05:23.816 EAL: No shared files mode enabled, IPC is disabled 00:05:23.816 EAL: Heap on socket 0 was expanded by 1026MB 00:05:24.075 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.334 EAL: request: mp_malloc_sync 00:05:24.334 EAL: No shared files mode enabled, IPC is disabled 00:05:24.334 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:24.334 passed 00:05:24.334 00:05:24.334 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.334 suites 1 1 n/a 0 0 00:05:24.334 tests 2 2 2 0 0 00:05:24.334 asserts 497 497 497 0 n/a 00:05:24.334 00:05:24.334 Elapsed time = 1.301 seconds 00:05:24.334 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.334 EAL: request: mp_malloc_sync 00:05:24.334 EAL: No shared files mode enabled, IPC is disabled 00:05:24.334 EAL: Heap on socket 0 was shrunk by 2MB 00:05:24.334 EAL: No shared files mode enabled, IPC is disabled 00:05:24.334 EAL: No shared files mode enabled, IPC is disabled 00:05:24.334 EAL: No shared files mode enabled, IPC is disabled 00:05:24.334 00:05:24.334 real 0m1.418s 00:05:24.334 user 0m0.827s 00:05:24.334 sys 0m0.560s 00:05:24.334 02:46:34 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.334 02:46:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:24.334 ************************************ 00:05:24.334 END TEST env_vtophys 00:05:24.334 ************************************ 00:05:24.334 02:46:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.334 02:46:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.334 02:46:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.334 02:46:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.334 ************************************ 00:05:24.334 START TEST env_pci 00:05:24.334 ************************************ 00:05:24.334 02:46:34 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.334 00:05:24.334 00:05:24.334 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.334 http://cunit.sourceforge.net/ 00:05:24.334 00:05:24.334 00:05:24.334 Suite: pci 00:05:24.334 Test: pci_hook ...[2024-11-19 02:46:34.797480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98820 has claimed it 00:05:24.334 EAL: Cannot find device (10000:00:01.0) 00:05:24.334 EAL: Failed to attach device on primary process 00:05:24.334 passed 00:05:24.334 00:05:24.334 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.334 
suites 1 1 n/a 0 0 00:05:24.334 tests 1 1 1 0 0 00:05:24.334 asserts 25 25 25 0 n/a 00:05:24.334 00:05:24.334 Elapsed time = 0.021 seconds 00:05:24.334 00:05:24.334 real 0m0.034s 00:05:24.334 user 0m0.012s 00:05:24.334 sys 0m0.022s 00:05:24.334 02:46:34 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.334 02:46:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:24.334 ************************************ 00:05:24.334 END TEST env_pci 00:05:24.334 ************************************ 00:05:24.334 02:46:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.334 02:46:34 env -- env/env.sh@15 -- # uname 00:05:24.334 02:46:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.334 02:46:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.334 02:46:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.334 02:46:34 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:24.334 02:46:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.334 02:46:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.334 ************************************ 00:05:24.334 START TEST env_dpdk_post_init 00:05:24.334 ************************************ 00:05:24.334 02:46:34 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.334 EAL: Detected CPU lcores: 48 00:05:24.334 EAL: Detected NUMA nodes: 2 00:05:24.334 EAL: Detected shared linkage of DPDK 00:05:24.334 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.334 EAL: Selected IOVA mode 'VA' 00:05:24.334 EAL: VFIO support initialized 00:05:24.334 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.594 EAL: Using IOMMU type 1 (Type 1) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:24.594 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:24.595 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:24.595 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:24.595 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:24.595 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:24.595 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:24.595 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:25.534 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:28.818 EAL: 
Releasing PCI mapped resource for 0000:88:00.0 00:05:28.818 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:28.818 Starting DPDK initialization... 00:05:28.818 Starting SPDK post initialization... 00:05:28.818 SPDK NVMe probe 00:05:28.818 Attaching to 0000:88:00.0 00:05:28.818 Attached to 0000:88:00.0 00:05:28.818 Cleaning up... 00:05:28.818 00:05:28.818 real 0m4.378s 00:05:28.818 user 0m3.255s 00:05:28.818 sys 0m0.185s 00:05:28.818 02:46:39 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.818 02:46:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.818 ************************************ 00:05:28.818 END TEST env_dpdk_post_init 00:05:28.818 ************************************ 00:05:28.818 02:46:39 env -- env/env.sh@26 -- # uname 00:05:28.818 02:46:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.818 02:46:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.818 02:46:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.818 02:46:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.818 02:46:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.818 ************************************ 00:05:28.818 START TEST env_mem_callbacks 00:05:28.818 ************************************ 00:05:28.818 02:46:39 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.818 EAL: Detected CPU lcores: 48 00:05:28.818 EAL: Detected NUMA nodes: 2 00:05:28.818 EAL: Detected shared linkage of DPDK 00:05:28.818 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.818 EAL: Selected IOVA mode 'VA' 00:05:28.818 EAL: VFIO support initialized 00:05:28.818 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.818 00:05:28.818 00:05:28.818 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.818 http://cunit.sourceforge.net/ 00:05:28.818 00:05:28.818 00:05:28.818 Suite: memory 00:05:28.818 Test: test ... 
00:05:28.818 register 0x200000200000 2097152 00:05:28.818 malloc 3145728 00:05:28.818 register 0x200000400000 4194304 00:05:28.818 buf 0x200000500000 len 3145728 PASSED 00:05:28.818 malloc 64 00:05:28.818 buf 0x2000004fff40 len 64 PASSED 00:05:28.818 malloc 4194304 00:05:28.818 register 0x200000800000 6291456 00:05:28.818 buf 0x200000a00000 len 4194304 PASSED 00:05:28.818 free 0x200000500000 3145728 00:05:28.818 free 0x2000004fff40 64 00:05:28.818 unregister 0x200000400000 4194304 PASSED 00:05:28.818 free 0x200000a00000 4194304 00:05:28.818 unregister 0x200000800000 6291456 PASSED 00:05:28.818 malloc 8388608 00:05:28.818 register 0x200000400000 10485760 00:05:28.818 buf 0x200000600000 len 8388608 PASSED 00:05:28.818 free 0x200000600000 8388608 00:05:28.818 unregister 0x200000400000 10485760 PASSED 00:05:28.818 passed 00:05:28.818 00:05:28.818 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.818 suites 1 1 n/a 0 0 00:05:28.818 tests 1 1 1 0 0 00:05:28.818 asserts 15 15 15 0 n/a 00:05:28.818 00:05:28.818 Elapsed time = 0.005 seconds 00:05:28.818 00:05:28.818 real 0m0.044s 00:05:28.818 user 0m0.010s 00:05:28.818 sys 0m0.033s 00:05:28.818 02:46:39 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.818 02:46:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:28.818 ************************************ 00:05:28.818 END TEST env_mem_callbacks 00:05:28.818 ************************************ 00:05:28.818 00:05:28.818 real 0m6.418s 00:05:28.818 user 0m4.430s 00:05:28.818 sys 0m1.040s 00:05:28.818 02:46:39 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.818 02:46:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.818 ************************************ 00:05:28.818 END TEST env 00:05:28.818 ************************************ 00:05:28.818 02:46:39 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.818 02:46:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.818 02:46:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.818 02:46:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.818 ************************************ 00:05:28.818 START TEST rpc 00:05:28.818 ************************************ 00:05:28.818 02:46:39 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.077 * Looking for test storage... 
00:05:29.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.077 02:46:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.077 02:46:39 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.077 02:46:39 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.077 02:46:39 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.077 02:46:39 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.077 02:46:39 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.077 02:46:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.077 02:46:39 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.077 02:46:39 rpc -- scripts/common.sh@345 -- # : 1 00:05:29.077 02:46:39 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.077 02:46:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.077 02:46:39 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.077 02:46:39 rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.077 02:46:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.077 02:46:39 rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.077 02:46:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.077 02:46:39 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.077 02:46:39 rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.077 02:46:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.077 02:46:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.077 02:46:39 rpc -- scripts/common.sh@368 -- # return 0 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.077 --rc genhtml_branch_coverage=1 00:05:29.077 --rc genhtml_function_coverage=1 00:05:29.077 --rc genhtml_legend=1 00:05:29.077 --rc geninfo_all_blocks=1 00:05:29.077 --rc geninfo_unexecuted_blocks=1 00:05:29.077 00:05:29.077 ' 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.077 --rc genhtml_branch_coverage=1 00:05:29.077 --rc genhtml_function_coverage=1 00:05:29.077 --rc genhtml_legend=1 00:05:29.077 --rc geninfo_all_blocks=1 00:05:29.077 --rc geninfo_unexecuted_blocks=1 00:05:29.077 00:05:29.077 ' 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.077 --rc genhtml_branch_coverage=1 00:05:29.077 --rc genhtml_function_coverage=1 
00:05:29.077 --rc genhtml_legend=1 00:05:29.077 --rc geninfo_all_blocks=1 00:05:29.077 --rc geninfo_unexecuted_blocks=1 00:05:29.077 00:05:29.077 ' 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.077 --rc genhtml_branch_coverage=1 00:05:29.077 --rc genhtml_function_coverage=1 00:05:29.077 --rc genhtml_legend=1 00:05:29.077 --rc geninfo_all_blocks=1 00:05:29.077 --rc geninfo_unexecuted_blocks=1 00:05:29.077 00:05:29.077 ' 00:05:29.077 02:46:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99548 00:05:29.077 02:46:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.077 02:46:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:29.077 02:46:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99548 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@835 -- # '[' -z 99548 ']' 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.077 02:46:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.077 [2024-11-19 02:46:39.623513] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:29.077 [2024-11-19 02:46:39.623596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99548 ] 00:05:29.077 [2024-11-19 02:46:39.690475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.336 [2024-11-19 02:46:39.738581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.336 [2024-11-19 02:46:39.738648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99548' to capture a snapshot of events at runtime. 00:05:29.336 [2024-11-19 02:46:39.738675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.337 [2024-11-19 02:46:39.738686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.337 [2024-11-19 02:46:39.738706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99548 for offline analysis/debug. 
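Illustration only, not part of the captured console output: the rpc.sh run below drives this spdk_tgt instance (pid 99548) over the UNIX socket /var/tmp/spdk.sock shown above through the rpc_cmd helper. As a minimal sketch, and assuming the same workspace layout and default RPC socket, the malloc/passthru/bdev-listing steps that appear later in this log could also be issued by hand with scripts/rpc.py; the Malloc0/Passthru0 names mirror the objects the test creates:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py bdev_malloc_create 8 512                  # 8 MB malloc bdev, 512-byte blocks (the test sees Malloc0)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                # 2 while both bdevs exist
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0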
00:05:29.337 [2024-11-19 02:46:39.739357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.596 02:46:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.596 02:46:39 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.596 02:46:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.596 02:46:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.596 02:46:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:29.596 02:46:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:29.596 02:46:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.596 02:46:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.596 02:46:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.596 ************************************ 00:05:29.596 START TEST rpc_integrity 00:05:29.596 ************************************ 00:05:29.596 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:29.596 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.596 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.596 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.596 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.596 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.596 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.596 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.596 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.596 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.597 { 00:05:29.597 "name": "Malloc0", 00:05:29.597 "aliases": [ 00:05:29.597 "c6116834-143a-43fc-807d-5b59557cef41" 00:05:29.597 ], 00:05:29.597 "product_name": "Malloc disk", 00:05:29.597 "block_size": 512, 00:05:29.597 "num_blocks": 16384, 00:05:29.597 "uuid": "c6116834-143a-43fc-807d-5b59557cef41", 00:05:29.597 "assigned_rate_limits": { 00:05:29.597 "rw_ios_per_sec": 0, 00:05:29.597 "rw_mbytes_per_sec": 0, 00:05:29.597 "r_mbytes_per_sec": 0, 00:05:29.597 "w_mbytes_per_sec": 0 00:05:29.597 }, 
00:05:29.597 "claimed": false, 00:05:29.597 "zoned": false, 00:05:29.597 "supported_io_types": { 00:05:29.597 "read": true, 00:05:29.597 "write": true, 00:05:29.597 "unmap": true, 00:05:29.597 "flush": true, 00:05:29.597 "reset": true, 00:05:29.597 "nvme_admin": false, 00:05:29.597 "nvme_io": false, 00:05:29.597 "nvme_io_md": false, 00:05:29.597 "write_zeroes": true, 00:05:29.597 "zcopy": true, 00:05:29.597 "get_zone_info": false, 00:05:29.597 "zone_management": false, 00:05:29.597 "zone_append": false, 00:05:29.597 "compare": false, 00:05:29.597 "compare_and_write": false, 00:05:29.597 "abort": true, 00:05:29.597 "seek_hole": false, 00:05:29.597 "seek_data": false, 00:05:29.597 "copy": true, 00:05:29.597 "nvme_iov_md": false 00:05:29.597 }, 00:05:29.597 "memory_domains": [ 00:05:29.597 { 00:05:29.597 "dma_device_id": "system", 00:05:29.597 "dma_device_type": 1 00:05:29.597 }, 00:05:29.597 { 00:05:29.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.597 "dma_device_type": 2 00:05:29.597 } 00:05:29.597 ], 00:05:29.597 "driver_specific": {} 00:05:29.597 } 00:05:29.597 ]' 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.597 [2024-11-19 02:46:40.120884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:29.597 [2024-11-19 02:46:40.120930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.597 [2024-11-19 02:46:40.120954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6e58e0 00:05:29.597 [2024-11-19 02:46:40.120968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.597 [2024-11-19 02:46:40.122350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.597 [2024-11-19 02:46:40.122372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.597 Passthru0 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.597 { 00:05:29.597 "name": "Malloc0", 00:05:29.597 "aliases": [ 00:05:29.597 "c6116834-143a-43fc-807d-5b59557cef41" 00:05:29.597 ], 00:05:29.597 "product_name": "Malloc disk", 00:05:29.597 "block_size": 512, 00:05:29.597 "num_blocks": 16384, 00:05:29.597 "uuid": "c6116834-143a-43fc-807d-5b59557cef41", 00:05:29.597 "assigned_rate_limits": { 00:05:29.597 "rw_ios_per_sec": 0, 00:05:29.597 "rw_mbytes_per_sec": 0, 00:05:29.597 "r_mbytes_per_sec": 0, 00:05:29.597 "w_mbytes_per_sec": 0 00:05:29.597 }, 00:05:29.597 "claimed": true, 00:05:29.597 "claim_type": "exclusive_write", 00:05:29.597 "zoned": false, 00:05:29.597 "supported_io_types": { 00:05:29.597 "read": true, 00:05:29.597 "write": true, 00:05:29.597 "unmap": true, 00:05:29.597 "flush": 
true, 00:05:29.597 "reset": true, 00:05:29.597 "nvme_admin": false, 00:05:29.597 "nvme_io": false, 00:05:29.597 "nvme_io_md": false, 00:05:29.597 "write_zeroes": true, 00:05:29.597 "zcopy": true, 00:05:29.597 "get_zone_info": false, 00:05:29.597 "zone_management": false, 00:05:29.597 "zone_append": false, 00:05:29.597 "compare": false, 00:05:29.597 "compare_and_write": false, 00:05:29.597 "abort": true, 00:05:29.597 "seek_hole": false, 00:05:29.597 "seek_data": false, 00:05:29.597 "copy": true, 00:05:29.597 "nvme_iov_md": false 00:05:29.597 }, 00:05:29.597 "memory_domains": [ 00:05:29.597 { 00:05:29.597 "dma_device_id": "system", 00:05:29.597 "dma_device_type": 1 00:05:29.597 }, 00:05:29.597 { 00:05:29.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.597 "dma_device_type": 2 00:05:29.597 } 00:05:29.597 ], 00:05:29.597 "driver_specific": {} 00:05:29.597 }, 00:05:29.597 { 00:05:29.597 "name": "Passthru0", 00:05:29.597 "aliases": [ 00:05:29.597 "d2eecff5-b053-5b24-b250-27f8dd1fb9f3" 00:05:29.597 ], 00:05:29.597 "product_name": "passthru", 00:05:29.597 "block_size": 512, 00:05:29.597 "num_blocks": 16384, 00:05:29.597 "uuid": "d2eecff5-b053-5b24-b250-27f8dd1fb9f3", 00:05:29.597 "assigned_rate_limits": { 00:05:29.597 "rw_ios_per_sec": 0, 00:05:29.597 "rw_mbytes_per_sec": 0, 00:05:29.597 "r_mbytes_per_sec": 0, 00:05:29.597 "w_mbytes_per_sec": 0 00:05:29.597 }, 00:05:29.597 "claimed": false, 00:05:29.597 "zoned": false, 00:05:29.597 "supported_io_types": { 00:05:29.597 "read": true, 00:05:29.597 "write": true, 00:05:29.597 "unmap": true, 00:05:29.597 "flush": true, 00:05:29.597 "reset": true, 00:05:29.597 "nvme_admin": false, 00:05:29.597 "nvme_io": false, 00:05:29.597 "nvme_io_md": false, 00:05:29.597 "write_zeroes": true, 00:05:29.597 "zcopy": true, 00:05:29.597 "get_zone_info": false, 00:05:29.597 "zone_management": false, 00:05:29.597 "zone_append": false, 00:05:29.597 "compare": false, 00:05:29.597 "compare_and_write": false, 00:05:29.597 "abort": true, 00:05:29.597 "seek_hole": false, 00:05:29.597 "seek_data": false, 00:05:29.597 "copy": true, 00:05:29.597 "nvme_iov_md": false 00:05:29.597 }, 00:05:29.597 "memory_domains": [ 00:05:29.597 { 00:05:29.597 "dma_device_id": "system", 00:05:29.597 "dma_device_type": 1 00:05:29.597 }, 00:05:29.597 { 00:05:29.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.597 "dma_device_type": 2 00:05:29.597 } 00:05:29.597 ], 00:05:29.597 "driver_specific": { 00:05:29.597 "passthru": { 00:05:29.597 "name": "Passthru0", 00:05:29.597 "base_bdev_name": "Malloc0" 00:05:29.597 } 00:05:29.597 } 00:05:29.597 } 00:05:29.597 ]' 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.597 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.597 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.857 02:46:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.857 00:05:29.857 real 0m0.215s 00:05:29.857 user 0m0.141s 00:05:29.857 sys 0m0.017s 00:05:29.857 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.857 02:46:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.857 ************************************ 00:05:29.857 END TEST rpc_integrity 00:05:29.857 ************************************ 00:05:29.857 02:46:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:29.857 02:46:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.857 02:46:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.857 02:46:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.857 ************************************ 00:05:29.857 START TEST rpc_plugins 00:05:29.857 ************************************ 00:05:29.857 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:29.857 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:29.857 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.857 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.857 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.857 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:29.857 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:29.857 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.857 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.857 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.857 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:29.857 { 00:05:29.857 "name": "Malloc1", 00:05:29.857 "aliases": [ 00:05:29.857 "a8705c88-5f94-46a4-8e31-8cebda62ce2c" 00:05:29.857 ], 00:05:29.857 "product_name": "Malloc disk", 00:05:29.857 "block_size": 4096, 00:05:29.857 "num_blocks": 256, 00:05:29.857 "uuid": "a8705c88-5f94-46a4-8e31-8cebda62ce2c", 00:05:29.857 "assigned_rate_limits": { 00:05:29.857 "rw_ios_per_sec": 0, 00:05:29.857 "rw_mbytes_per_sec": 0, 00:05:29.857 "r_mbytes_per_sec": 0, 00:05:29.857 "w_mbytes_per_sec": 0 00:05:29.857 }, 00:05:29.857 "claimed": false, 00:05:29.857 "zoned": false, 00:05:29.857 "supported_io_types": { 00:05:29.857 "read": true, 00:05:29.857 "write": true, 00:05:29.857 "unmap": true, 00:05:29.857 "flush": true, 00:05:29.857 "reset": true, 00:05:29.857 "nvme_admin": false, 00:05:29.857 "nvme_io": false, 00:05:29.857 "nvme_io_md": false, 00:05:29.857 "write_zeroes": true, 00:05:29.857 "zcopy": true, 00:05:29.857 "get_zone_info": false, 00:05:29.857 "zone_management": false, 00:05:29.857 "zone_append": false, 00:05:29.857 "compare": false, 00:05:29.858 "compare_and_write": false, 00:05:29.858 "abort": true, 00:05:29.858 "seek_hole": false, 00:05:29.858 "seek_data": false, 00:05:29.858 "copy": true, 00:05:29.858 "nvme_iov_md": false 
00:05:29.858 }, 00:05:29.858 "memory_domains": [ 00:05:29.858 { 00:05:29.858 "dma_device_id": "system", 00:05:29.858 "dma_device_type": 1 00:05:29.858 }, 00:05:29.858 { 00:05:29.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.858 "dma_device_type": 2 00:05:29.858 } 00:05:29.858 ], 00:05:29.858 "driver_specific": {} 00:05:29.858 } 00:05:29.858 ]' 00:05:29.858 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:29.858 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:29.858 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.858 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.858 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:29.858 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:29.858 02:46:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:29.858 00:05:29.858 real 0m0.109s 00:05:29.858 user 0m0.070s 00:05:29.858 sys 0m0.009s 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.858 02:46:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.858 ************************************ 00:05:29.858 END TEST rpc_plugins 00:05:29.858 ************************************ 00:05:29.858 02:46:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.858 02:46:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.858 02:46:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.858 02:46:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.858 ************************************ 00:05:29.858 START TEST rpc_trace_cmd_test 00:05:29.858 ************************************ 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.858 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99548", 00:05:29.858 "tpoint_group_mask": "0x8", 00:05:29.858 "iscsi_conn": { 00:05:29.858 "mask": "0x2", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "scsi": { 00:05:29.858 "mask": "0x4", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "bdev": { 00:05:29.858 "mask": "0x8", 00:05:29.858 "tpoint_mask": "0xffffffffffffffff" 00:05:29.858 }, 00:05:29.858 "nvmf_rdma": { 00:05:29.858 "mask": "0x10", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "nvmf_tcp": { 00:05:29.858 "mask": "0x20", 00:05:29.858 
"tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "ftl": { 00:05:29.858 "mask": "0x40", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "blobfs": { 00:05:29.858 "mask": "0x80", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "dsa": { 00:05:29.858 "mask": "0x200", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "thread": { 00:05:29.858 "mask": "0x400", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "nvme_pcie": { 00:05:29.858 "mask": "0x800", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "iaa": { 00:05:29.858 "mask": "0x1000", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "nvme_tcp": { 00:05:29.858 "mask": "0x2000", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "bdev_nvme": { 00:05:29.858 "mask": "0x4000", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "sock": { 00:05:29.858 "mask": "0x8000", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "blob": { 00:05:29.858 "mask": "0x10000", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "bdev_raid": { 00:05:29.858 "mask": "0x20000", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 }, 00:05:29.858 "scheduler": { 00:05:29.858 "mask": "0x40000", 00:05:29.858 "tpoint_mask": "0x0" 00:05:29.858 } 00:05:29.858 }' 00:05:29.858 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.118 00:05:30.118 real 0m0.187s 00:05:30.118 user 0m0.161s 00:05:30.118 sys 0m0.017s 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.118 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.118 ************************************ 00:05:30.118 END TEST rpc_trace_cmd_test 00:05:30.118 ************************************ 00:05:30.118 02:46:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.118 02:46:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.118 02:46:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.118 02:46:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.118 02:46:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.118 02:46:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.118 ************************************ 00:05:30.118 START TEST rpc_daemon_integrity 00:05:30.118 ************************************ 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.118 02:46:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.118 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.118 { 00:05:30.118 "name": "Malloc2", 00:05:30.118 "aliases": [ 00:05:30.118 "85b6d3de-be9e-43cb-8c97-87a823b73720" 00:05:30.118 ], 00:05:30.118 "product_name": "Malloc disk", 00:05:30.118 "block_size": 512, 00:05:30.118 "num_blocks": 16384, 00:05:30.118 "uuid": "85b6d3de-be9e-43cb-8c97-87a823b73720", 00:05:30.118 "assigned_rate_limits": { 00:05:30.118 "rw_ios_per_sec": 0, 00:05:30.118 "rw_mbytes_per_sec": 0, 00:05:30.118 "r_mbytes_per_sec": 0, 00:05:30.118 "w_mbytes_per_sec": 0 00:05:30.118 }, 00:05:30.118 "claimed": false, 00:05:30.118 "zoned": false, 00:05:30.118 "supported_io_types": { 00:05:30.118 "read": true, 00:05:30.118 "write": true, 00:05:30.118 "unmap": true, 00:05:30.118 "flush": true, 00:05:30.118 "reset": true, 00:05:30.118 "nvme_admin": false, 00:05:30.118 "nvme_io": false, 00:05:30.118 "nvme_io_md": false, 00:05:30.118 "write_zeroes": true, 00:05:30.118 "zcopy": true, 00:05:30.118 "get_zone_info": false, 00:05:30.118 "zone_management": false, 00:05:30.118 "zone_append": false, 00:05:30.118 "compare": false, 00:05:30.118 "compare_and_write": false, 00:05:30.118 "abort": true, 00:05:30.118 "seek_hole": false, 00:05:30.118 "seek_data": false, 00:05:30.118 "copy": true, 00:05:30.118 "nvme_iov_md": false 00:05:30.118 }, 00:05:30.119 "memory_domains": [ 00:05:30.119 { 00:05:30.119 "dma_device_id": "system", 00:05:30.119 "dma_device_type": 1 00:05:30.119 }, 00:05:30.119 { 00:05:30.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.119 "dma_device_type": 2 00:05:30.119 } 00:05:30.119 ], 00:05:30.119 "driver_specific": {} 00:05:30.119 } 00:05:30.119 ]' 00:05:30.119 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.378 [2024-11-19 02:46:40.778864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.378 
[2024-11-19 02:46:40.778918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.378 [2024-11-19 02:46:40.778941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8156f0 00:05:30.378 [2024-11-19 02:46:40.778953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.378 [2024-11-19 02:46:40.780146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.378 [2024-11-19 02:46:40.780167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.378 Passthru0 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.378 { 00:05:30.378 "name": "Malloc2", 00:05:30.378 "aliases": [ 00:05:30.378 "85b6d3de-be9e-43cb-8c97-87a823b73720" 00:05:30.378 ], 00:05:30.378 "product_name": "Malloc disk", 00:05:30.378 "block_size": 512, 00:05:30.378 "num_blocks": 16384, 00:05:30.378 "uuid": "85b6d3de-be9e-43cb-8c97-87a823b73720", 00:05:30.378 "assigned_rate_limits": { 00:05:30.378 "rw_ios_per_sec": 0, 00:05:30.378 "rw_mbytes_per_sec": 0, 00:05:30.378 "r_mbytes_per_sec": 0, 00:05:30.378 "w_mbytes_per_sec": 0 00:05:30.378 }, 00:05:30.378 "claimed": true, 00:05:30.378 "claim_type": "exclusive_write", 00:05:30.378 "zoned": false, 00:05:30.378 "supported_io_types": { 00:05:30.378 "read": true, 00:05:30.378 "write": true, 00:05:30.378 "unmap": true, 00:05:30.378 "flush": true, 00:05:30.378 "reset": true, 00:05:30.378 "nvme_admin": false, 00:05:30.378 "nvme_io": false, 00:05:30.378 "nvme_io_md": false, 00:05:30.378 "write_zeroes": true, 00:05:30.378 "zcopy": true, 00:05:30.378 "get_zone_info": false, 00:05:30.378 "zone_management": false, 00:05:30.378 "zone_append": false, 00:05:30.378 "compare": false, 00:05:30.378 "compare_and_write": false, 00:05:30.378 "abort": true, 00:05:30.378 "seek_hole": false, 00:05:30.378 "seek_data": false, 00:05:30.378 "copy": true, 00:05:30.378 "nvme_iov_md": false 00:05:30.378 }, 00:05:30.378 "memory_domains": [ 00:05:30.378 { 00:05:30.378 "dma_device_id": "system", 00:05:30.378 "dma_device_type": 1 00:05:30.378 }, 00:05:30.378 { 00:05:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.378 "dma_device_type": 2 00:05:30.378 } 00:05:30.378 ], 00:05:30.378 "driver_specific": {} 00:05:30.378 }, 00:05:30.378 { 00:05:30.378 "name": "Passthru0", 00:05:30.378 "aliases": [ 00:05:30.378 "e33de07b-b02d-5eb5-96bc-a746a91b0ab4" 00:05:30.378 ], 00:05:30.378 "product_name": "passthru", 00:05:30.378 "block_size": 512, 00:05:30.378 "num_blocks": 16384, 00:05:30.378 "uuid": "e33de07b-b02d-5eb5-96bc-a746a91b0ab4", 00:05:30.378 "assigned_rate_limits": { 00:05:30.378 "rw_ios_per_sec": 0, 00:05:30.378 "rw_mbytes_per_sec": 0, 00:05:30.378 "r_mbytes_per_sec": 0, 00:05:30.378 "w_mbytes_per_sec": 0 00:05:30.378 }, 00:05:30.378 "claimed": false, 00:05:30.378 "zoned": false, 00:05:30.378 "supported_io_types": { 00:05:30.378 "read": true, 00:05:30.378 "write": true, 00:05:30.378 "unmap": true, 00:05:30.378 "flush": true, 00:05:30.378 "reset": true, 
00:05:30.378 "nvme_admin": false, 00:05:30.378 "nvme_io": false, 00:05:30.378 "nvme_io_md": false, 00:05:30.378 "write_zeroes": true, 00:05:30.378 "zcopy": true, 00:05:30.378 "get_zone_info": false, 00:05:30.378 "zone_management": false, 00:05:30.378 "zone_append": false, 00:05:30.378 "compare": false, 00:05:30.378 "compare_and_write": false, 00:05:30.378 "abort": true, 00:05:30.378 "seek_hole": false, 00:05:30.378 "seek_data": false, 00:05:30.378 "copy": true, 00:05:30.378 "nvme_iov_md": false 00:05:30.378 }, 00:05:30.378 "memory_domains": [ 00:05:30.378 { 00:05:30.378 "dma_device_id": "system", 00:05:30.378 "dma_device_type": 1 00:05:30.378 }, 00:05:30.378 { 00:05:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.378 "dma_device_type": 2 00:05:30.378 } 00:05:30.378 ], 00:05:30.378 "driver_specific": { 00:05:30.378 "passthru": { 00:05:30.378 "name": "Passthru0", 00:05:30.378 "base_bdev_name": "Malloc2" 00:05:30.378 } 00:05:30.378 } 00:05:30.378 } 00:05:30.378 ]' 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.378 00:05:30.378 real 0m0.224s 00:05:30.378 user 0m0.143s 00:05:30.378 sys 0m0.021s 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.378 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.378 ************************************ 00:05:30.378 END TEST rpc_daemon_integrity 00:05:30.378 ************************************ 00:05:30.378 02:46:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.379 02:46:40 rpc -- rpc/rpc.sh@84 -- # killprocess 99548 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 99548 ']' 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@958 -- # kill -0 99548 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99548 00:05:30.379 
02:46:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99548' 00:05:30.379 killing process with pid 99548 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@973 -- # kill 99548 00:05:30.379 02:46:40 rpc -- common/autotest_common.sh@978 -- # wait 99548 00:05:30.945 00:05:30.945 real 0m1.912s 00:05:30.945 user 0m2.369s 00:05:30.945 sys 0m0.612s 00:05:30.945 02:46:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.945 02:46:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.945 ************************************ 00:05:30.945 END TEST rpc 00:05:30.945 ************************************ 00:05:30.945 02:46:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:30.945 02:46:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.945 02:46:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.945 02:46:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.945 ************************************ 00:05:30.945 START TEST skip_rpc 00:05:30.945 ************************************ 00:05:30.945 02:46:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:30.945 * Looking for test storage... 00:05:30.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.945 02:46:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.945 02:46:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.945 02:46:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.945 02:46:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.945 02:46:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.945 02:46:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.945 02:46:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.945 02:46:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.945 02:46:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.946 02:46:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 02:46:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.946 02:46:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.946 02:46:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.946 02:46:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.946 ************************************ 00:05:30.946 START TEST skip_rpc 00:05:30.946 ************************************ 00:05:30.946 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:30.946 
02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=99931 00:05:30.946 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.946 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:30.946 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.205 [2024-11-19 02:46:41.606930] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:31.205 [2024-11-19 02:46:41.607013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99931 ] 00:05:31.205 [2024-11-19 02:46:41.671845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.205 [2024-11-19 02:46:41.717304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.470 02:46:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.470 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 99931 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 99931 ']' 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 99931 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99931 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99931' 00:05:36.471 killing process with pid 99931 00:05:36.471 02:46:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 99931 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 99931 00:05:36.471 00:05:36.471 real 0m5.432s 00:05:36.471 user 0m5.136s 00:05:36.471 sys 0m0.306s 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.471 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.471 ************************************ 00:05:36.471 END TEST skip_rpc 00:05:36.471 ************************************ 00:05:36.471 02:46:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.471 02:46:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.471 02:46:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.471 02:46:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.471 ************************************ 00:05:36.471 START TEST skip_rpc_with_json 00:05:36.471 ************************************ 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100619 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100619 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 100619 ']' 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.471 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.471 [2024-11-19 02:46:47.082156] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
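Illustration only, not part of the captured console output: the skip_rpc_with_json run that follows exercises the JSON-config path. It queries a TCP transport before one exists (the "transport 'tcp' does not exist" / "No such device" error further down), creates the transport, then snapshots the whole target configuration into the CONFIG_PATH set earlier (test/rpc/config.json). Assuming the same spdk_tgt is listening on /var/tmp/spdk.sock, a rough hand-driven equivalent with scripts/rpc.py would be:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py nvmf_get_transports --trtype tcp      # JSON-RPC error while no TCP transport exists yet
  ./scripts/rpc.py nvmf_create_transport -t tcp          # triggers the "TCP Transport Init" notice
  ./scripts/rpc.py save_config > test/rpc/config.json    # dumps every subsystem's config, as the cat of config.json below shows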
00:05:36.471 [2024-11-19 02:46:47.082229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100619 ] 00:05:36.729 [2024-11-19 02:46:47.150524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.729 [2024-11-19 02:46:47.193821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.988 [2024-11-19 02:46:47.448032] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:36.988 request: 00:05:36.988 { 00:05:36.988 "trtype": "tcp", 00:05:36.988 "method": "nvmf_get_transports", 00:05:36.988 "req_id": 1 00:05:36.988 } 00:05:36.988 Got JSON-RPC error response 00:05:36.988 response: 00:05:36.988 { 00:05:36.988 "code": -19, 00:05:36.988 "message": "No such device" 00:05:36.988 } 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.988 [2024-11-19 02:46:47.456151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.988 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.246 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.246 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:37.246 { 00:05:37.246 "subsystems": [ 00:05:37.246 { 00:05:37.246 "subsystem": "fsdev", 00:05:37.246 "config": [ 00:05:37.246 { 00:05:37.246 "method": "fsdev_set_opts", 00:05:37.246 "params": { 00:05:37.246 "fsdev_io_pool_size": 65535, 00:05:37.246 "fsdev_io_cache_size": 256 00:05:37.246 } 00:05:37.246 } 00:05:37.246 ] 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "subsystem": "vfio_user_target", 00:05:37.246 "config": null 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "subsystem": "keyring", 00:05:37.246 "config": [] 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "subsystem": "iobuf", 00:05:37.246 "config": [ 00:05:37.246 { 00:05:37.246 "method": "iobuf_set_options", 00:05:37.246 "params": { 00:05:37.246 "small_pool_count": 8192, 00:05:37.246 "large_pool_count": 1024, 00:05:37.246 "small_bufsize": 8192, 00:05:37.246 "large_bufsize": 135168, 00:05:37.246 "enable_numa": false 00:05:37.246 } 00:05:37.246 } 00:05:37.246 
] 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "subsystem": "sock", 00:05:37.246 "config": [ 00:05:37.246 { 00:05:37.246 "method": "sock_set_default_impl", 00:05:37.246 "params": { 00:05:37.246 "impl_name": "posix" 00:05:37.246 } 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "method": "sock_impl_set_options", 00:05:37.246 "params": { 00:05:37.246 "impl_name": "ssl", 00:05:37.246 "recv_buf_size": 4096, 00:05:37.246 "send_buf_size": 4096, 00:05:37.246 "enable_recv_pipe": true, 00:05:37.246 "enable_quickack": false, 00:05:37.246 "enable_placement_id": 0, 00:05:37.246 "enable_zerocopy_send_server": true, 00:05:37.246 "enable_zerocopy_send_client": false, 00:05:37.246 "zerocopy_threshold": 0, 00:05:37.246 "tls_version": 0, 00:05:37.246 "enable_ktls": false 00:05:37.246 } 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "method": "sock_impl_set_options", 00:05:37.246 "params": { 00:05:37.246 "impl_name": "posix", 00:05:37.246 "recv_buf_size": 2097152, 00:05:37.246 "send_buf_size": 2097152, 00:05:37.246 "enable_recv_pipe": true, 00:05:37.246 "enable_quickack": false, 00:05:37.246 "enable_placement_id": 0, 00:05:37.246 "enable_zerocopy_send_server": true, 00:05:37.246 "enable_zerocopy_send_client": false, 00:05:37.246 "zerocopy_threshold": 0, 00:05:37.246 "tls_version": 0, 00:05:37.246 "enable_ktls": false 00:05:37.246 } 00:05:37.246 } 00:05:37.246 ] 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "subsystem": "vmd", 00:05:37.246 "config": [] 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "subsystem": "accel", 00:05:37.246 "config": [ 00:05:37.246 { 00:05:37.246 "method": "accel_set_options", 00:05:37.246 "params": { 00:05:37.246 "small_cache_size": 128, 00:05:37.246 "large_cache_size": 16, 00:05:37.246 "task_count": 2048, 00:05:37.246 "sequence_count": 2048, 00:05:37.246 "buf_count": 2048 00:05:37.246 } 00:05:37.246 } 00:05:37.246 ] 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "subsystem": "bdev", 00:05:37.246 "config": [ 00:05:37.246 { 00:05:37.246 "method": "bdev_set_options", 00:05:37.246 "params": { 00:05:37.246 "bdev_io_pool_size": 65535, 00:05:37.246 "bdev_io_cache_size": 256, 00:05:37.246 "bdev_auto_examine": true, 00:05:37.246 "iobuf_small_cache_size": 128, 00:05:37.246 "iobuf_large_cache_size": 16 00:05:37.246 } 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "method": "bdev_raid_set_options", 00:05:37.246 "params": { 00:05:37.246 "process_window_size_kb": 1024, 00:05:37.246 "process_max_bandwidth_mb_sec": 0 00:05:37.246 } 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "method": "bdev_iscsi_set_options", 00:05:37.246 "params": { 00:05:37.246 "timeout_sec": 30 00:05:37.246 } 00:05:37.246 }, 00:05:37.246 { 00:05:37.246 "method": "bdev_nvme_set_options", 00:05:37.246 "params": { 00:05:37.246 "action_on_timeout": "none", 00:05:37.246 "timeout_us": 0, 00:05:37.246 "timeout_admin_us": 0, 00:05:37.246 "keep_alive_timeout_ms": 10000, 00:05:37.246 "arbitration_burst": 0, 00:05:37.246 "low_priority_weight": 0, 00:05:37.246 "medium_priority_weight": 0, 00:05:37.246 "high_priority_weight": 0, 00:05:37.246 "nvme_adminq_poll_period_us": 10000, 00:05:37.246 "nvme_ioq_poll_period_us": 0, 00:05:37.246 "io_queue_requests": 0, 00:05:37.246 "delay_cmd_submit": true, 00:05:37.246 "transport_retry_count": 4, 00:05:37.246 "bdev_retry_count": 3, 00:05:37.247 "transport_ack_timeout": 0, 00:05:37.247 "ctrlr_loss_timeout_sec": 0, 00:05:37.247 "reconnect_delay_sec": 0, 00:05:37.247 "fast_io_fail_timeout_sec": 0, 00:05:37.247 "disable_auto_failback": false, 00:05:37.247 "generate_uuids": false, 00:05:37.247 "transport_tos": 0, 
00:05:37.247 "nvme_error_stat": false, 00:05:37.247 "rdma_srq_size": 0, 00:05:37.247 "io_path_stat": false, 00:05:37.247 "allow_accel_sequence": false, 00:05:37.247 "rdma_max_cq_size": 0, 00:05:37.247 "rdma_cm_event_timeout_ms": 0, 00:05:37.247 "dhchap_digests": [ 00:05:37.247 "sha256", 00:05:37.247 "sha384", 00:05:37.247 "sha512" 00:05:37.247 ], 00:05:37.247 "dhchap_dhgroups": [ 00:05:37.247 "null", 00:05:37.247 "ffdhe2048", 00:05:37.247 "ffdhe3072", 00:05:37.247 "ffdhe4096", 00:05:37.247 "ffdhe6144", 00:05:37.247 "ffdhe8192" 00:05:37.247 ] 00:05:37.247 } 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "method": "bdev_nvme_set_hotplug", 00:05:37.247 "params": { 00:05:37.247 "period_us": 100000, 00:05:37.247 "enable": false 00:05:37.247 } 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "method": "bdev_wait_for_examine" 00:05:37.247 } 00:05:37.247 ] 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "scsi", 00:05:37.247 "config": null 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "scheduler", 00:05:37.247 "config": [ 00:05:37.247 { 00:05:37.247 "method": "framework_set_scheduler", 00:05:37.247 "params": { 00:05:37.247 "name": "static" 00:05:37.247 } 00:05:37.247 } 00:05:37.247 ] 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "vhost_scsi", 00:05:37.247 "config": [] 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "vhost_blk", 00:05:37.247 "config": [] 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "ublk", 00:05:37.247 "config": [] 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "nbd", 00:05:37.247 "config": [] 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "nvmf", 00:05:37.247 "config": [ 00:05:37.247 { 00:05:37.247 "method": "nvmf_set_config", 00:05:37.247 "params": { 00:05:37.247 "discovery_filter": "match_any", 00:05:37.247 "admin_cmd_passthru": { 00:05:37.247 "identify_ctrlr": false 00:05:37.247 }, 00:05:37.247 "dhchap_digests": [ 00:05:37.247 "sha256", 00:05:37.247 "sha384", 00:05:37.247 "sha512" 00:05:37.247 ], 00:05:37.247 "dhchap_dhgroups": [ 00:05:37.247 "null", 00:05:37.247 "ffdhe2048", 00:05:37.247 "ffdhe3072", 00:05:37.247 "ffdhe4096", 00:05:37.247 "ffdhe6144", 00:05:37.247 "ffdhe8192" 00:05:37.247 ] 00:05:37.247 } 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "method": "nvmf_set_max_subsystems", 00:05:37.247 "params": { 00:05:37.247 "max_subsystems": 1024 00:05:37.247 } 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "method": "nvmf_set_crdt", 00:05:37.247 "params": { 00:05:37.247 "crdt1": 0, 00:05:37.247 "crdt2": 0, 00:05:37.247 "crdt3": 0 00:05:37.247 } 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "method": "nvmf_create_transport", 00:05:37.247 "params": { 00:05:37.247 "trtype": "TCP", 00:05:37.247 "max_queue_depth": 128, 00:05:37.247 "max_io_qpairs_per_ctrlr": 127, 00:05:37.247 "in_capsule_data_size": 4096, 00:05:37.247 "max_io_size": 131072, 00:05:37.247 "io_unit_size": 131072, 00:05:37.247 "max_aq_depth": 128, 00:05:37.247 "num_shared_buffers": 511, 00:05:37.247 "buf_cache_size": 4294967295, 00:05:37.247 "dif_insert_or_strip": false, 00:05:37.247 "zcopy": false, 00:05:37.247 "c2h_success": true, 00:05:37.247 "sock_priority": 0, 00:05:37.247 "abort_timeout_sec": 1, 00:05:37.247 "ack_timeout": 0, 00:05:37.247 "data_wr_pool_size": 0 00:05:37.247 } 00:05:37.247 } 00:05:37.247 ] 00:05:37.247 }, 00:05:37.247 { 00:05:37.247 "subsystem": "iscsi", 00:05:37.247 "config": [ 00:05:37.247 { 00:05:37.247 "method": "iscsi_set_options", 00:05:37.247 "params": { 00:05:37.247 "node_base": "iqn.2016-06.io.spdk", 00:05:37.247 "max_sessions": 
128, 00:05:37.247 "max_connections_per_session": 2, 00:05:37.247 "max_queue_depth": 64, 00:05:37.247 "default_time2wait": 2, 00:05:37.247 "default_time2retain": 20, 00:05:37.247 "first_burst_length": 8192, 00:05:37.247 "immediate_data": true, 00:05:37.247 "allow_duplicated_isid": false, 00:05:37.247 "error_recovery_level": 0, 00:05:37.247 "nop_timeout": 60, 00:05:37.247 "nop_in_interval": 30, 00:05:37.247 "disable_chap": false, 00:05:37.247 "require_chap": false, 00:05:37.247 "mutual_chap": false, 00:05:37.247 "chap_group": 0, 00:05:37.247 "max_large_datain_per_connection": 64, 00:05:37.247 "max_r2t_per_connection": 4, 00:05:37.247 "pdu_pool_size": 36864, 00:05:37.247 "immediate_data_pool_size": 16384, 00:05:37.247 "data_out_pool_size": 2048 00:05:37.247 } 00:05:37.247 } 00:05:37.247 ] 00:05:37.247 } 00:05:37.247 ] 00:05:37.247 } 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100619 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100619 ']' 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100619 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100619 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100619' 00:05:37.247 killing process with pid 100619 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100619 00:05:37.247 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100619 00:05:37.505 02:46:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=100753 00:05:37.505 02:46:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:37.505 02:46:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 100753 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100753 ']' 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100753 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100753 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 100753' 00:05:42.774 killing process with pid 100753 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100753 00:05:42.774 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100753 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.033 00:05:43.033 real 0m6.430s 00:05:43.033 user 0m6.053s 00:05:43.033 sys 0m0.678s 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.033 ************************************ 00:05:43.033 END TEST skip_rpc_with_json 00:05:43.033 ************************************ 00:05:43.033 02:46:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:43.033 02:46:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.033 02:46:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.033 02:46:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.033 ************************************ 00:05:43.033 START TEST skip_rpc_with_delay 00:05:43.033 ************************************ 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.033 [2024-11-19 
02:46:53.568295] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.033 00:05:43.033 real 0m0.074s 00:05:43.033 user 0m0.046s 00:05:43.033 sys 0m0.027s 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.033 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.033 ************************************ 00:05:43.033 END TEST skip_rpc_with_delay 00:05:43.033 ************************************ 00:05:43.033 02:46:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.033 02:46:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.033 02:46:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.033 02:46:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.033 02:46:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.033 02:46:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.033 ************************************ 00:05:43.033 START TEST exit_on_failed_rpc_init 00:05:43.033 ************************************ 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101476 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101476 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 101476 ']' 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.033 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.034 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.293 [2024-11-19 02:46:53.692786] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:05:43.293 [2024-11-19 02:46:53.692889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101476 ] 00:05:43.293 [2024-11-19 02:46:53.759180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.293 [2024-11-19 02:46:53.807564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.552 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.552 [2024-11-19 02:46:54.119383] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:43.552 [2024-11-19 02:46:54.119474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101487 ] 00:05:43.812 [2024-11-19 02:46:54.185196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.812 [2024-11-19 02:46:54.231923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.812 [2024-11-19 02:46:54.232031] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:43.812 [2024-11-19 02:46:54.232049] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:43.812 [2024-11-19 02:46:54.232060] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101476 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 101476 ']' 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 101476 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101476 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101476' 00:05:43.812 killing process with pid 101476 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 101476 00:05:43.812 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 101476 00:05:44.380 00:05:44.380 real 0m1.065s 00:05:44.380 user 0m1.157s 00:05:44.380 sys 0m0.432s 00:05:44.380 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.380 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.380 ************************************ 00:05:44.380 END TEST exit_on_failed_rpc_init 00:05:44.380 ************************************ 00:05:44.380 02:46:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:44.380 00:05:44.380 real 0m13.349s 00:05:44.380 user 0m12.576s 00:05:44.380 sys 0m1.628s 00:05:44.380 02:46:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.380 02:46:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.380 ************************************ 00:05:44.380 END TEST skip_rpc 00:05:44.380 ************************************ 00:05:44.380 02:46:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.380 02:46:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.380 02:46:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.380 02:46:54 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.380 ************************************ 00:05:44.380 START TEST rpc_client 00:05:44.380 ************************************ 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.380 * Looking for test storage... 00:05:44.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.380 02:46:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.380 --rc genhtml_branch_coverage=1 00:05:44.380 --rc genhtml_function_coverage=1 00:05:44.380 --rc genhtml_legend=1 00:05:44.380 --rc geninfo_all_blocks=1 00:05:44.380 --rc geninfo_unexecuted_blocks=1 00:05:44.380 00:05:44.380 ' 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.380 --rc genhtml_branch_coverage=1 00:05:44.380 --rc genhtml_function_coverage=1 00:05:44.380 --rc genhtml_legend=1 00:05:44.380 --rc geninfo_all_blocks=1 00:05:44.380 --rc geninfo_unexecuted_blocks=1 00:05:44.380 00:05:44.380 ' 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.380 --rc genhtml_branch_coverage=1 00:05:44.380 --rc genhtml_function_coverage=1 00:05:44.380 --rc genhtml_legend=1 00:05:44.380 --rc geninfo_all_blocks=1 00:05:44.380 --rc geninfo_unexecuted_blocks=1 00:05:44.380 00:05:44.380 ' 00:05:44.380 02:46:54 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.380 --rc genhtml_branch_coverage=1 00:05:44.380 --rc genhtml_function_coverage=1 00:05:44.381 --rc genhtml_legend=1 00:05:44.381 --rc geninfo_all_blocks=1 00:05:44.381 --rc geninfo_unexecuted_blocks=1 00:05:44.381 00:05:44.381 ' 00:05:44.381 02:46:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:44.381 OK 00:05:44.381 02:46:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.381 00:05:44.381 real 0m0.162s 00:05:44.381 user 0m0.110s 00:05:44.381 sys 0m0.061s 00:05:44.381 02:46:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.381 02:46:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:44.381 ************************************ 00:05:44.381 END TEST rpc_client 00:05:44.381 ************************************ 00:05:44.381 02:46:54 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:05:44.381 02:46:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.381 02:46:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.381 02:46:54 -- common/autotest_common.sh@10 -- # set +x 00:05:44.381 ************************************ 00:05:44.381 START TEST json_config 00:05:44.381 ************************************ 00:05:44.381 02:46:54 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.641 02:46:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.641 02:46:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.641 02:46:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.641 02:46:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.641 02:46:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.641 02:46:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.641 02:46:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.641 02:46:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:44.641 02:46:55 json_config -- scripts/common.sh@345 -- # : 1 00:05:44.641 02:46:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.641 02:46:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.641 02:46:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:44.641 02:46:55 json_config -- scripts/common.sh@353 -- # local d=1 00:05:44.641 02:46:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.641 02:46:55 json_config -- scripts/common.sh@355 -- # echo 1 00:05:44.641 02:46:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.641 02:46:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@353 -- # local d=2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.641 02:46:55 json_config -- scripts/common.sh@355 -- # echo 2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.641 02:46:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.641 02:46:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.641 02:46:55 json_config -- scripts/common.sh@368 -- # return 0 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 02:46:55 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 02:46:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:44.641 02:46:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.641 02:46:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.641 02:46:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.641 02:46:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.641 02:46:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.641 02:46:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.641 02:46:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.641 02:46:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.641 02:46:55 json_config -- paths/export.sh@5 -- # export PATH 00:05:44.641 02:46:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@51 -- # : 0 00:05:44.641 02:46:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.642 02:46:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:44.642 02:46:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.642 02:46:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.642 02:46:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.642 02:46:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.642 02:46:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.642 02:46:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.642 02:46:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:44.642 INFO: JSON configuration test init 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.642 02:46:55 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:44.642 02:46:55 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:44.642 02:46:55 json_config -- json_config/common.sh@10 -- # shift 00:05:44.642 02:46:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.642 02:46:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.642 02:46:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.642 02:46:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.642 02:46:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.642 02:46:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=101745 00:05:44.642 02:46:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.642 02:46:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:44.642 Waiting for target to run... 00:05:44.642 02:46:55 json_config -- json_config/common.sh@25 -- # waitforlisten 101745 /var/tmp/spdk_tgt.sock 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 101745 ']' 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.642 02:46:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.642 [2024-11-19 02:46:55.187783] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:05:44.642 [2024-11-19 02:46:55.187867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101745 ] 00:05:45.213 [2024-11-19 02:46:55.697230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.213 [2024-11-19 02:46:55.734735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.781 02:46:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.781 02:46:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:45.781 02:46:56 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.781 00:05:45.781 02:46:56 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:45.781 02:46:56 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:45.781 02:46:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.781 02:46:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.781 02:46:56 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:45.781 02:46:56 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:45.781 02:46:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.781 02:46:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.781 02:46:56 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:45.781 02:46:56 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:45.781 02:46:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:49.074 02:46:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.074 02:46:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:49.074 02:46:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:49.074 02:46:59 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@54 -- # sort 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:49.074 02:46:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.074 02:46:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:49.074 02:46:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.074 02:46:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:49.074 02:46:59 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.074 02:46:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.333 MallocForNvmf0 00:05:49.592 02:46:59 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.592 02:46:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.592 MallocForNvmf1 00:05:49.851 02:47:00 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.851 02:47:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.110 [2024-11-19 02:47:00.470491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.110 02:47:00 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.110 02:47:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.368 02:47:00 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.368 02:47:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.627 02:47:01 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.627 02:47:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.884 02:47:01 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.884 02:47:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.143 [2024-11-19 02:47:01.529930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:51.143 02:47:01 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:51.143 02:47:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.143 02:47:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.143 02:47:01 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:51.143 02:47:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.143 02:47:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.143 02:47:01 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:51.143 02:47:01 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.143 02:47:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.402 MallocBdevForConfigChangeCheck 00:05:51.402 02:47:01 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:51.402 02:47:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.402 02:47:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.402 02:47:01 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:51.402 02:47:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.660 02:47:02 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:51.660 INFO: shutting down applications... 
00:05:51.660 02:47:02 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:51.660 02:47:02 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:51.660 02:47:02 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:51.660 02:47:02 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:53.564 Calling clear_iscsi_subsystem 00:05:53.564 Calling clear_nvmf_subsystem 00:05:53.564 Calling clear_nbd_subsystem 00:05:53.564 Calling clear_ublk_subsystem 00:05:53.564 Calling clear_vhost_blk_subsystem 00:05:53.564 Calling clear_vhost_scsi_subsystem 00:05:53.564 Calling clear_bdev_subsystem 00:05:53.564 02:47:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:53.564 02:47:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:53.564 02:47:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:53.564 02:47:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.564 02:47:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:53.564 02:47:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:53.822 02:47:04 json_config -- json_config/json_config.sh@352 -- # break 00:05:53.822 02:47:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:53.822 02:47:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:53.822 02:47:04 json_config -- json_config/common.sh@31 -- # local app=target 00:05:53.822 02:47:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:53.822 02:47:04 json_config -- json_config/common.sh@35 -- # [[ -n 101745 ]] 00:05:53.822 02:47:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 101745 00:05:53.822 02:47:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:53.822 02:47:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.822 02:47:04 json_config -- json_config/common.sh@41 -- # kill -0 101745 00:05:53.822 02:47:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.399 02:47:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.399 02:47:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.399 02:47:04 json_config -- json_config/common.sh@41 -- # kill -0 101745 00:05:54.399 02:47:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.399 02:47:04 json_config -- json_config/common.sh@43 -- # break 00:05:54.399 02:47:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.399 02:47:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.399 SPDK target shutdown done 00:05:54.399 02:47:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:54.399 INFO: relaunching applications... 
00:05:54.399 02:47:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.399 02:47:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.399 02:47:04 json_config -- json_config/common.sh@10 -- # shift 00:05:54.399 02:47:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.399 02:47:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.399 02:47:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.399 02:47:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.399 02:47:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.399 02:47:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=103065 00:05:54.399 02:47:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.399 02:47:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.400 Waiting for target to run... 00:05:54.400 02:47:04 json_config -- json_config/common.sh@25 -- # waitforlisten 103065 /var/tmp/spdk_tgt.sock 00:05:54.400 02:47:04 json_config -- common/autotest_common.sh@835 -- # '[' -z 103065 ']' 00:05:54.400 02:47:04 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.400 02:47:04 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.400 02:47:04 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.400 02:47:04 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.400 02:47:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.400 [2024-11-19 02:47:04.906831] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:54.400 [2024-11-19 02:47:04.906923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103065 ] 00:05:54.659 [2024-11-19 02:47:05.240318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.659 [2024-11-19 02:47:05.274762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.948 [2024-11-19 02:47:08.311831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.948 [2024-11-19 02:47:08.344269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:57.948 02:47:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.948 02:47:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:57.948 02:47:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:57.948 00:05:57.948 02:47:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:57.948 02:47:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:57.948 INFO: Checking if target configuration is the same... 
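The '+ ' lines that follow come from json_diff.sh, which reduces the "is the configuration the same" check to: normalize both JSON documents with config_filter.py -method sort, then diff the normalized copies. A minimal sketch of the same idea, assuming config_filter.py reads the configuration on stdin as the argument-free trace below suggests (temp-file names here are illustrative):

    # Hedged sketch of the json_diff.sh comparison traced below.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    saved=$SPDK/spdk_tgt_config.json

    # Dump the running target's configuration next to the saved one.
    running=$(mktemp /tmp/running_config.XXX)
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > "$running"

    # Sort both documents so key ordering cannot produce false differences, then diff.
    if diff -u <($SPDK/test/json_config/config_filter.py -method sort < "$saved") \
               <($SPDK/test/json_config/config_filter.py -method sort < "$running"); then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi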
00:05:57.948 02:47:08 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.948 02:47:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:57.948 02:47:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.948 + '[' 2 -ne 2 ']' 00:05:57.948 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:57.948 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:57.948 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:57.948 +++ basename /dev/fd/62 00:05:57.948 ++ mktemp /tmp/62.XXX 00:05:57.948 + tmp_file_1=/tmp/62.izF 00:05:57.948 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.948 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.948 + tmp_file_2=/tmp/spdk_tgt_config.json.hlO 00:05:57.948 + ret=0 00:05:57.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.208 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.467 + diff -u /tmp/62.izF /tmp/spdk_tgt_config.json.hlO 00:05:58.467 + echo 'INFO: JSON config files are the same' 00:05:58.467 INFO: JSON config files are the same 00:05:58.467 + rm /tmp/62.izF /tmp/spdk_tgt_config.json.hlO 00:05:58.467 + exit 0 00:05:58.467 02:47:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:58.467 02:47:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:58.467 INFO: changing configuration and checking if this can be detected... 00:05:58.467 02:47:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:58.467 02:47:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:58.727 02:47:09 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.727 02:47:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:58.727 02:47:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.727 + '[' 2 -ne 2 ']' 00:05:58.727 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:58.727 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:58.727 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.727 +++ basename /dev/fd/62 00:05:58.727 ++ mktemp /tmp/62.XXX 00:05:58.727 + tmp_file_1=/tmp/62.jLF 00:05:58.727 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.727 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:58.727 + tmp_file_2=/tmp/spdk_tgt_config.json.Fap 00:05:58.727 + ret=0 00:05:58.727 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.987 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.987 + diff -u /tmp/62.jLF /tmp/spdk_tgt_config.json.Fap 00:05:58.987 + ret=1 00:05:58.987 + echo '=== Start of file: /tmp/62.jLF ===' 00:05:58.987 + cat /tmp/62.jLF 00:05:58.987 + echo '=== End of file: /tmp/62.jLF ===' 00:05:58.987 + echo '' 00:05:58.987 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Fap ===' 00:05:58.987 + cat /tmp/spdk_tgt_config.json.Fap 00:05:58.987 + echo '=== End of file: /tmp/spdk_tgt_config.json.Fap ===' 00:05:58.987 + echo '' 00:05:58.987 + rm /tmp/62.jLF /tmp/spdk_tgt_config.json.Fap 00:05:58.987 + exit 1 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:58.987 INFO: configuration change detected. 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 103065 ]] 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.987 02:47:09 json_config -- json_config/json_config.sh@330 -- # killprocess 103065 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 103065 ']' 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@958 -- # kill -0 103065 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@959 -- # uname 00:05:58.987 02:47:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.987 02:47:09 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103065 00:05:59.246 02:47:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.246 02:47:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.246 02:47:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103065' 00:05:59.246 killing process with pid 103065 00:05:59.246 02:47:09 json_config -- common/autotest_common.sh@973 -- # kill 103065 00:05:59.246 02:47:09 json_config -- common/autotest_common.sh@978 -- # wait 103065 00:06:00.623 02:47:11 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.623 02:47:11 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:00.623 02:47:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.623 02:47:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.623 02:47:11 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:00.623 02:47:11 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:00.623 INFO: Success 00:06:00.623 00:06:00.623 real 0m16.213s 00:06:00.623 user 0m18.291s 00:06:00.623 sys 0m2.054s 00:06:00.623 02:47:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.623 02:47:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.623 ************************************ 00:06:00.623 END TEST json_config 00:06:00.623 ************************************ 00:06:00.623 02:47:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:00.623 02:47:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.623 02:47:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.623 02:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:00.882 ************************************ 00:06:00.882 START TEST json_config_extra_key 00:06:00.882 ************************************ 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.882 02:47:11 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.882 --rc genhtml_branch_coverage=1 00:06:00.882 --rc genhtml_function_coverage=1 00:06:00.882 --rc genhtml_legend=1 00:06:00.882 --rc geninfo_all_blocks=1 00:06:00.882 --rc geninfo_unexecuted_blocks=1 00:06:00.882 00:06:00.882 ' 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.882 --rc genhtml_branch_coverage=1 00:06:00.882 --rc genhtml_function_coverage=1 00:06:00.882 --rc genhtml_legend=1 00:06:00.882 --rc geninfo_all_blocks=1 00:06:00.882 --rc geninfo_unexecuted_blocks=1 00:06:00.882 00:06:00.882 ' 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.882 --rc genhtml_branch_coverage=1 00:06:00.882 --rc genhtml_function_coverage=1 00:06:00.882 --rc genhtml_legend=1 00:06:00.882 --rc geninfo_all_blocks=1 00:06:00.882 --rc geninfo_unexecuted_blocks=1 00:06:00.882 00:06:00.882 ' 00:06:00.882 02:47:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.882 --rc genhtml_branch_coverage=1 00:06:00.882 --rc genhtml_function_coverage=1 00:06:00.882 --rc genhtml_legend=1 00:06:00.882 --rc geninfo_all_blocks=1 00:06:00.882 --rc geninfo_unexecuted_blocks=1 00:06:00.882 00:06:00.882 ' 00:06:00.882 02:47:11 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.882 02:47:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.882 02:47:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.882 02:47:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.882 02:47:11 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.882 02:47:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:00.882 02:47:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.882 02:47:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.882 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:00.883 INFO: launching applications... 
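json_config_extra_key drives the same json_config/common.sh helpers, but boots the target straight from a pre-written extra_key.json instead of a saved configuration. A hedged sketch of the launch-and-wait step performed right after this point (the polling loop is an illustrative stand-in for the waitforlisten helper named in the trace, not its actual implementation):

    # Hedged sketch of json_config_test_start_app: launch spdk_tgt from a JSON file,
    # then poll its RPC socket until it answers.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK/test/json_config/extra_key.json" &
    app_pid=$!

    for (( i = 0; i < 100; i++ )); do
        if "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 spdk_get_version >/dev/null 2>&1; then
            break                                    # target is up and serving RPCs
        fi
        sleep 0.1
    done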
00:06:00.883 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=103980 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.883 Waiting for target to run... 00:06:00.883 02:47:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 103980 /var/tmp/spdk_tgt.sock 00:06:00.883 02:47:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 103980 ']' 00:06:00.883 02:47:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.883 02:47:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.883 02:47:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.883 02:47:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.883 02:47:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.883 [2024-11-19 02:47:11.441929] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:00.883 [2024-11-19 02:47:11.442027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103980 ] 00:06:01.453 [2024-11-19 02:47:11.940359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.453 [2024-11-19 02:47:11.981551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.021 02:47:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.021 02:47:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:02.021 00:06:02.021 02:47:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:02.021 INFO: shutting down applications... 
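Both the json_config run earlier in this log and the alias_rpc test that starts below tear their daemon down through the killprocess helper whose trace surrounds the 'killing process with pid' messages: confirm the PID still names an SPDK reactor, kill it, then wait for it. A simplified, hedged reconstruction of that pattern (not a verbatim copy of autotest_common.sh):

    # Hedged sketch of the killprocess pattern visible in the trace.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1       # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")      # spdk_tgt reactors show up as reactor_0
        if [ "$name" = sudo ]; then
            return 1                                 # simplified: the real helper handles sudo wrappers differently
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }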
00:06:02.021 02:47:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 103980 ]] 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 103980 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103980 00:06:02.021 02:47:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.592 02:47:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.592 02:47:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.592 02:47:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103980 00:06:02.592 02:47:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.592 02:47:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:02.592 02:47:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.592 02:47:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.592 SPDK target shutdown done 00:06:02.592 02:47:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:02.592 Success 00:06:02.592 00:06:02.592 real 0m1.674s 00:06:02.592 user 0m1.477s 00:06:02.592 sys 0m0.624s 00:06:02.592 02:47:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.592 02:47:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.592 ************************************ 00:06:02.592 END TEST json_config_extra_key 00:06:02.592 ************************************ 00:06:02.592 02:47:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.592 02:47:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.592 02:47:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.592 02:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:02.592 ************************************ 00:06:02.592 START TEST alias_rpc 00:06:02.592 ************************************ 00:06:02.592 02:47:12 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.592 * Looking for test storage... 
00:06:02.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:02.592 02:47:13 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.592 02:47:13 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.592 02:47:13 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.592 02:47:13 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:02.592 02:47:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:02.593 02:47:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.593 02:47:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:02.593 02:47:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.593 02:47:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.593 02:47:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.593 02:47:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 02:47:13 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.593 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=104181 00:06:02.593 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.593 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 104181 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 104181 ']' 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.593 02:47:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 [2024-11-19 02:47:13.183201] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:02.593 [2024-11-19 02:47:13.183284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104181 ] 00:06:02.852 [2024-11-19 02:47:13.271568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.852 [2024-11-19 02:47:13.327998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.111 02:47:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.111 02:47:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.111 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:03.370 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 104181 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 104181 ']' 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 104181 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104181 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104181' 00:06:03.370 killing process with pid 104181 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 104181 00:06:03.370 02:47:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 104181 00:06:03.937 00:06:03.937 real 0m1.326s 00:06:03.937 user 0m1.545s 00:06:03.937 sys 0m0.484s 00:06:03.937 02:47:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.937 02:47:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.937 ************************************ 00:06:03.937 END TEST alias_rpc 00:06:03.937 ************************************ 00:06:03.937 02:47:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:03.937 02:47:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:03.937 02:47:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.937 02:47:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.937 02:47:14 -- common/autotest_common.sh@10 -- # set +x 00:06:03.937 ************************************ 00:06:03.937 START TEST spdkcli_tcp 00:06:03.937 ************************************ 00:06:03.937 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:03.937 * Looking for test storage... 
00:06:03.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:03.937 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.937 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.937 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.937 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:03.937 02:47:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.938 02:47:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc geninfo_all_blocks=1 00:06:03.938 --rc geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc geninfo_all_blocks=1 00:06:03.938 --rc 
geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc geninfo_all_blocks=1 00:06:03.938 --rc geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc geninfo_all_blocks=1 00:06:03.938 --rc geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104448 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:03.938 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 104448 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 104448 ']' 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.938 02:47:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.938 [2024-11-19 02:47:14.552069] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:03.938 [2024-11-19 02:47:14.552174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104448 ] 00:06:04.197 [2024-11-19 02:47:14.618389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.197 [2024-11-19 02:47:14.664799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.197 [2024-11-19 02:47:14.664803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.456 02:47:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.456 02:47:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:04.456 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104507 00:06:04.456 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:04.456 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:04.716 [ 00:06:04.716 "bdev_malloc_delete", 00:06:04.716 "bdev_malloc_create", 00:06:04.716 "bdev_null_resize", 00:06:04.716 "bdev_null_delete", 00:06:04.716 "bdev_null_create", 00:06:04.716 "bdev_nvme_cuse_unregister", 00:06:04.716 "bdev_nvme_cuse_register", 00:06:04.716 "bdev_opal_new_user", 00:06:04.716 "bdev_opal_set_lock_state", 00:06:04.716 "bdev_opal_delete", 00:06:04.716 "bdev_opal_get_info", 00:06:04.716 "bdev_opal_create", 00:06:04.716 "bdev_nvme_opal_revert", 00:06:04.716 "bdev_nvme_opal_init", 00:06:04.716 "bdev_nvme_send_cmd", 00:06:04.716 "bdev_nvme_set_keys", 00:06:04.716 "bdev_nvme_get_path_iostat", 00:06:04.716 "bdev_nvme_get_mdns_discovery_info", 00:06:04.716 "bdev_nvme_stop_mdns_discovery", 00:06:04.716 "bdev_nvme_start_mdns_discovery", 00:06:04.716 "bdev_nvme_set_multipath_policy", 00:06:04.716 "bdev_nvme_set_preferred_path", 00:06:04.716 "bdev_nvme_get_io_paths", 00:06:04.716 "bdev_nvme_remove_error_injection", 00:06:04.716 "bdev_nvme_add_error_injection", 00:06:04.716 "bdev_nvme_get_discovery_info", 00:06:04.716 "bdev_nvme_stop_discovery", 00:06:04.716 "bdev_nvme_start_discovery", 00:06:04.716 "bdev_nvme_get_controller_health_info", 00:06:04.716 "bdev_nvme_disable_controller", 00:06:04.716 "bdev_nvme_enable_controller", 00:06:04.716 "bdev_nvme_reset_controller", 00:06:04.716 "bdev_nvme_get_transport_statistics", 00:06:04.716 "bdev_nvme_apply_firmware", 00:06:04.716 "bdev_nvme_detach_controller", 00:06:04.716 "bdev_nvme_get_controllers", 00:06:04.716 "bdev_nvme_attach_controller", 00:06:04.716 "bdev_nvme_set_hotplug", 00:06:04.716 "bdev_nvme_set_options", 00:06:04.716 "bdev_passthru_delete", 00:06:04.716 "bdev_passthru_create", 00:06:04.716 "bdev_lvol_set_parent_bdev", 00:06:04.716 "bdev_lvol_set_parent", 00:06:04.716 "bdev_lvol_check_shallow_copy", 00:06:04.716 "bdev_lvol_start_shallow_copy", 00:06:04.716 "bdev_lvol_grow_lvstore", 00:06:04.716 "bdev_lvol_get_lvols", 00:06:04.716 "bdev_lvol_get_lvstores", 00:06:04.716 "bdev_lvol_delete", 00:06:04.716 "bdev_lvol_set_read_only", 00:06:04.716 "bdev_lvol_resize", 00:06:04.716 "bdev_lvol_decouple_parent", 00:06:04.716 "bdev_lvol_inflate", 00:06:04.716 "bdev_lvol_rename", 00:06:04.716 "bdev_lvol_clone_bdev", 00:06:04.716 "bdev_lvol_clone", 00:06:04.716 "bdev_lvol_snapshot", 00:06:04.716 "bdev_lvol_create", 00:06:04.716 "bdev_lvol_delete_lvstore", 00:06:04.716 "bdev_lvol_rename_lvstore", 
00:06:04.716 "bdev_lvol_create_lvstore", 00:06:04.716 "bdev_raid_set_options", 00:06:04.716 "bdev_raid_remove_base_bdev", 00:06:04.716 "bdev_raid_add_base_bdev", 00:06:04.716 "bdev_raid_delete", 00:06:04.716 "bdev_raid_create", 00:06:04.716 "bdev_raid_get_bdevs", 00:06:04.716 "bdev_error_inject_error", 00:06:04.716 "bdev_error_delete", 00:06:04.716 "bdev_error_create", 00:06:04.716 "bdev_split_delete", 00:06:04.716 "bdev_split_create", 00:06:04.716 "bdev_delay_delete", 00:06:04.716 "bdev_delay_create", 00:06:04.716 "bdev_delay_update_latency", 00:06:04.716 "bdev_zone_block_delete", 00:06:04.716 "bdev_zone_block_create", 00:06:04.716 "blobfs_create", 00:06:04.716 "blobfs_detect", 00:06:04.716 "blobfs_set_cache_size", 00:06:04.716 "bdev_aio_delete", 00:06:04.716 "bdev_aio_rescan", 00:06:04.716 "bdev_aio_create", 00:06:04.716 "bdev_ftl_set_property", 00:06:04.716 "bdev_ftl_get_properties", 00:06:04.716 "bdev_ftl_get_stats", 00:06:04.716 "bdev_ftl_unmap", 00:06:04.716 "bdev_ftl_unload", 00:06:04.716 "bdev_ftl_delete", 00:06:04.716 "bdev_ftl_load", 00:06:04.716 "bdev_ftl_create", 00:06:04.716 "bdev_virtio_attach_controller", 00:06:04.716 "bdev_virtio_scsi_get_devices", 00:06:04.716 "bdev_virtio_detach_controller", 00:06:04.716 "bdev_virtio_blk_set_hotplug", 00:06:04.716 "bdev_iscsi_delete", 00:06:04.716 "bdev_iscsi_create", 00:06:04.716 "bdev_iscsi_set_options", 00:06:04.716 "accel_error_inject_error", 00:06:04.716 "ioat_scan_accel_module", 00:06:04.716 "dsa_scan_accel_module", 00:06:04.716 "iaa_scan_accel_module", 00:06:04.716 "vfu_virtio_create_fs_endpoint", 00:06:04.716 "vfu_virtio_create_scsi_endpoint", 00:06:04.716 "vfu_virtio_scsi_remove_target", 00:06:04.716 "vfu_virtio_scsi_add_target", 00:06:04.716 "vfu_virtio_create_blk_endpoint", 00:06:04.716 "vfu_virtio_delete_endpoint", 00:06:04.716 "keyring_file_remove_key", 00:06:04.716 "keyring_file_add_key", 00:06:04.716 "keyring_linux_set_options", 00:06:04.716 "fsdev_aio_delete", 00:06:04.716 "fsdev_aio_create", 00:06:04.716 "iscsi_get_histogram", 00:06:04.716 "iscsi_enable_histogram", 00:06:04.716 "iscsi_set_options", 00:06:04.716 "iscsi_get_auth_groups", 00:06:04.716 "iscsi_auth_group_remove_secret", 00:06:04.716 "iscsi_auth_group_add_secret", 00:06:04.716 "iscsi_delete_auth_group", 00:06:04.716 "iscsi_create_auth_group", 00:06:04.716 "iscsi_set_discovery_auth", 00:06:04.716 "iscsi_get_options", 00:06:04.716 "iscsi_target_node_request_logout", 00:06:04.716 "iscsi_target_node_set_redirect", 00:06:04.716 "iscsi_target_node_set_auth", 00:06:04.716 "iscsi_target_node_add_lun", 00:06:04.716 "iscsi_get_stats", 00:06:04.716 "iscsi_get_connections", 00:06:04.716 "iscsi_portal_group_set_auth", 00:06:04.716 "iscsi_start_portal_group", 00:06:04.716 "iscsi_delete_portal_group", 00:06:04.716 "iscsi_create_portal_group", 00:06:04.716 "iscsi_get_portal_groups", 00:06:04.716 "iscsi_delete_target_node", 00:06:04.716 "iscsi_target_node_remove_pg_ig_maps", 00:06:04.716 "iscsi_target_node_add_pg_ig_maps", 00:06:04.716 "iscsi_create_target_node", 00:06:04.716 "iscsi_get_target_nodes", 00:06:04.716 "iscsi_delete_initiator_group", 00:06:04.716 "iscsi_initiator_group_remove_initiators", 00:06:04.716 "iscsi_initiator_group_add_initiators", 00:06:04.716 "iscsi_create_initiator_group", 00:06:04.716 "iscsi_get_initiator_groups", 00:06:04.716 "nvmf_set_crdt", 00:06:04.716 "nvmf_set_config", 00:06:04.716 "nvmf_set_max_subsystems", 00:06:04.716 "nvmf_stop_mdns_prr", 00:06:04.716 "nvmf_publish_mdns_prr", 00:06:04.716 "nvmf_subsystem_get_listeners", 00:06:04.716 
"nvmf_subsystem_get_qpairs", 00:06:04.716 "nvmf_subsystem_get_controllers", 00:06:04.716 "nvmf_get_stats", 00:06:04.716 "nvmf_get_transports", 00:06:04.716 "nvmf_create_transport", 00:06:04.716 "nvmf_get_targets", 00:06:04.716 "nvmf_delete_target", 00:06:04.716 "nvmf_create_target", 00:06:04.716 "nvmf_subsystem_allow_any_host", 00:06:04.716 "nvmf_subsystem_set_keys", 00:06:04.716 "nvmf_subsystem_remove_host", 00:06:04.716 "nvmf_subsystem_add_host", 00:06:04.716 "nvmf_ns_remove_host", 00:06:04.716 "nvmf_ns_add_host", 00:06:04.716 "nvmf_subsystem_remove_ns", 00:06:04.716 "nvmf_subsystem_set_ns_ana_group", 00:06:04.716 "nvmf_subsystem_add_ns", 00:06:04.716 "nvmf_subsystem_listener_set_ana_state", 00:06:04.716 "nvmf_discovery_get_referrals", 00:06:04.716 "nvmf_discovery_remove_referral", 00:06:04.716 "nvmf_discovery_add_referral", 00:06:04.717 "nvmf_subsystem_remove_listener", 00:06:04.717 "nvmf_subsystem_add_listener", 00:06:04.717 "nvmf_delete_subsystem", 00:06:04.717 "nvmf_create_subsystem", 00:06:04.717 "nvmf_get_subsystems", 00:06:04.717 "env_dpdk_get_mem_stats", 00:06:04.717 "nbd_get_disks", 00:06:04.717 "nbd_stop_disk", 00:06:04.717 "nbd_start_disk", 00:06:04.717 "ublk_recover_disk", 00:06:04.717 "ublk_get_disks", 00:06:04.717 "ublk_stop_disk", 00:06:04.717 "ublk_start_disk", 00:06:04.717 "ublk_destroy_target", 00:06:04.717 "ublk_create_target", 00:06:04.717 "virtio_blk_create_transport", 00:06:04.717 "virtio_blk_get_transports", 00:06:04.717 "vhost_controller_set_coalescing", 00:06:04.717 "vhost_get_controllers", 00:06:04.717 "vhost_delete_controller", 00:06:04.717 "vhost_create_blk_controller", 00:06:04.717 "vhost_scsi_controller_remove_target", 00:06:04.717 "vhost_scsi_controller_add_target", 00:06:04.717 "vhost_start_scsi_controller", 00:06:04.717 "vhost_create_scsi_controller", 00:06:04.717 "thread_set_cpumask", 00:06:04.717 "scheduler_set_options", 00:06:04.717 "framework_get_governor", 00:06:04.717 "framework_get_scheduler", 00:06:04.717 "framework_set_scheduler", 00:06:04.717 "framework_get_reactors", 00:06:04.717 "thread_get_io_channels", 00:06:04.717 "thread_get_pollers", 00:06:04.717 "thread_get_stats", 00:06:04.717 "framework_monitor_context_switch", 00:06:04.717 "spdk_kill_instance", 00:06:04.717 "log_enable_timestamps", 00:06:04.717 "log_get_flags", 00:06:04.717 "log_clear_flag", 00:06:04.717 "log_set_flag", 00:06:04.717 "log_get_level", 00:06:04.717 "log_set_level", 00:06:04.717 "log_get_print_level", 00:06:04.717 "log_set_print_level", 00:06:04.717 "framework_enable_cpumask_locks", 00:06:04.717 "framework_disable_cpumask_locks", 00:06:04.717 "framework_wait_init", 00:06:04.717 "framework_start_init", 00:06:04.717 "scsi_get_devices", 00:06:04.717 "bdev_get_histogram", 00:06:04.717 "bdev_enable_histogram", 00:06:04.717 "bdev_set_qos_limit", 00:06:04.717 "bdev_set_qd_sampling_period", 00:06:04.717 "bdev_get_bdevs", 00:06:04.717 "bdev_reset_iostat", 00:06:04.717 "bdev_get_iostat", 00:06:04.717 "bdev_examine", 00:06:04.717 "bdev_wait_for_examine", 00:06:04.717 "bdev_set_options", 00:06:04.717 "accel_get_stats", 00:06:04.717 "accel_set_options", 00:06:04.717 "accel_set_driver", 00:06:04.717 "accel_crypto_key_destroy", 00:06:04.717 "accel_crypto_keys_get", 00:06:04.717 "accel_crypto_key_create", 00:06:04.717 "accel_assign_opc", 00:06:04.717 "accel_get_module_info", 00:06:04.717 "accel_get_opc_assignments", 00:06:04.717 "vmd_rescan", 00:06:04.717 "vmd_remove_device", 00:06:04.717 "vmd_enable", 00:06:04.717 "sock_get_default_impl", 00:06:04.717 "sock_set_default_impl", 
00:06:04.717 "sock_impl_set_options", 00:06:04.717 "sock_impl_get_options", 00:06:04.717 "iobuf_get_stats", 00:06:04.717 "iobuf_set_options", 00:06:04.717 "keyring_get_keys", 00:06:04.717 "vfu_tgt_set_base_path", 00:06:04.717 "framework_get_pci_devices", 00:06:04.717 "framework_get_config", 00:06:04.717 "framework_get_subsystems", 00:06:04.717 "fsdev_set_opts", 00:06:04.717 "fsdev_get_opts", 00:06:04.717 "trace_get_info", 00:06:04.717 "trace_get_tpoint_group_mask", 00:06:04.717 "trace_disable_tpoint_group", 00:06:04.717 "trace_enable_tpoint_group", 00:06:04.717 "trace_clear_tpoint_mask", 00:06:04.717 "trace_set_tpoint_mask", 00:06:04.717 "notify_get_notifications", 00:06:04.717 "notify_get_types", 00:06:04.717 "spdk_get_version", 00:06:04.717 "rpc_get_methods" 00:06:04.717 ] 00:06:04.717 02:47:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.717 02:47:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:04.717 02:47:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104448 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 104448 ']' 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 104448 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104448 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104448' 00:06:04.717 killing process with pid 104448 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 104448 00:06:04.717 02:47:15 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 104448 00:06:05.284 00:06:05.284 real 0m1.264s 00:06:05.284 user 0m2.281s 00:06:05.284 sys 0m0.471s 00:06:05.284 02:47:15 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.284 02:47:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.284 ************************************ 00:06:05.284 END TEST spdkcli_tcp 00:06:05.284 ************************************ 00:06:05.284 02:47:15 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.284 02:47:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.284 02:47:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.284 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:05.284 ************************************ 00:06:05.284 START TEST dpdk_mem_utility 00:06:05.284 ************************************ 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.284 * Looking for test storage... 
00:06:05.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.284 02:47:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.284 --rc genhtml_branch_coverage=1 00:06:05.284 --rc genhtml_function_coverage=1 00:06:05.284 --rc genhtml_legend=1 00:06:05.284 --rc geninfo_all_blocks=1 00:06:05.284 --rc geninfo_unexecuted_blocks=1 00:06:05.284 00:06:05.284 ' 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.284 --rc 
genhtml_branch_coverage=1 00:06:05.284 --rc genhtml_function_coverage=1 00:06:05.284 --rc genhtml_legend=1 00:06:05.284 --rc geninfo_all_blocks=1 00:06:05.284 --rc geninfo_unexecuted_blocks=1 00:06:05.284 00:06:05.284 ' 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.284 --rc genhtml_branch_coverage=1 00:06:05.284 --rc genhtml_function_coverage=1 00:06:05.284 --rc genhtml_legend=1 00:06:05.284 --rc geninfo_all_blocks=1 00:06:05.284 --rc geninfo_unexecuted_blocks=1 00:06:05.284 00:06:05.284 ' 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.284 --rc genhtml_branch_coverage=1 00:06:05.284 --rc genhtml_function_coverage=1 00:06:05.284 --rc genhtml_legend=1 00:06:05.284 --rc geninfo_all_blocks=1 00:06:05.284 --rc geninfo_unexecuted_blocks=1 00:06:05.284 00:06:05.284 ' 00:06:05.284 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:05.284 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104711 00:06:05.284 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.284 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104711 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 104711 ']' 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.284 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.284 [2024-11-19 02:47:15.860401] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:05.284 [2024-11-19 02:47:15.860489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104711 ] 00:06:05.543 [2024-11-19 02:47:15.926437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.543 [2024-11-19 02:47:15.971998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.803 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.803 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:05.803 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:05.803 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:05.803 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.803 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 { 00:06:05.803 "filename": "/tmp/spdk_mem_dump.txt" 00:06:05.803 } 00:06:05.803 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.803 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:05.803 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:05.803 1 heaps totaling size 810.000000 MiB 00:06:05.803 size: 810.000000 MiB heap id: 0 00:06:05.803 end heaps---------- 00:06:05.803 9 mempools totaling size 595.772034 MiB 00:06:05.803 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:05.803 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:05.803 size: 92.545471 MiB name: bdev_io_104711 00:06:05.803 size: 50.003479 MiB name: msgpool_104711 00:06:05.803 size: 36.509338 MiB name: fsdev_io_104711 00:06:05.803 size: 21.763794 MiB name: PDU_Pool 00:06:05.803 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:05.803 size: 4.133484 MiB name: evtpool_104711 00:06:05.803 size: 0.026123 MiB name: Session_Pool 00:06:05.803 end mempools------- 00:06:05.803 6 memzones totaling size 4.142822 MiB 00:06:05.803 size: 1.000366 MiB name: RG_ring_0_104711 00:06:05.803 size: 1.000366 MiB name: RG_ring_1_104711 00:06:05.803 size: 1.000366 MiB name: RG_ring_4_104711 00:06:05.803 size: 1.000366 MiB name: RG_ring_5_104711 00:06:05.803 size: 0.125366 MiB name: RG_ring_2_104711 00:06:05.803 size: 0.015991 MiB name: RG_ring_3_104711 00:06:05.803 end memzones------- 00:06:05.803 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:05.803 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:05.803 list of free elements. 
size: 10.862488 MiB 00:06:05.803 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:05.803 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:05.803 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:05.803 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:05.803 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:05.803 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:05.803 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:05.803 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:05.803 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:05.803 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:05.803 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:05.803 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:05.803 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:05.803 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:05.803 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:05.803 list of standard malloc elements. size: 199.218628 MiB 00:06:05.803 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:05.803 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:05.803 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:05.803 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:05.803 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:05.803 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:05.803 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:05.803 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:05.804 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:05.804 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:05.804 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:05.804 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:05.804 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:05.804 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:05.804 list of memzone associated elements. size: 599.918884 MiB 00:06:05.804 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:05.804 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:05.804 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:05.804 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:05.804 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:05.804 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_104711_0 00:06:05.804 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:05.804 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104711_0 00:06:05.804 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:05.804 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_104711_0 00:06:05.804 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:05.804 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:05.804 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:05.804 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:05.804 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:05.804 associated memzone info: size: 3.000122 MiB name: MP_evtpool_104711_0 00:06:05.804 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:05.804 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104711 00:06:05.804 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:05.804 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104711 00:06:05.804 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:05.804 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:05.804 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:05.804 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:05.804 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:05.804 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:05.804 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:05.804 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:05.804 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:05.804 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104711 00:06:05.804 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:05.804 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104711 00:06:05.804 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:05.804 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104711 00:06:05.804 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:05.804 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104711 00:06:05.804 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:05.804 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_104711 00:06:05.804 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:05.804 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104711 00:06:05.804 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:05.804 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:05.804 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:05.804 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:05.804 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:05.804 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:05.804 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:05.804 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_104711 00:06:05.804 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:05.804 associated memzone info: size: 0.125366 MiB name: RG_ring_2_104711 00:06:05.804 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:05.804 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:05.804 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:05.804 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:05.804 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:05.804 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104711 00:06:05.804 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:05.804 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:05.804 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:05.804 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104711 00:06:05.804 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:05.804 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_104711 00:06:05.804 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:05.804 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104711 00:06:05.804 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:05.804 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:05.804 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:05.804 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104711 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 104711 ']' 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 104711 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104711 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104711' 00:06:05.804 killing process with pid 104711 00:06:05.804 02:47:16 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 104711 00:06:05.804 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 104711 00:06:06.371 00:06:06.371 real 0m1.072s 00:06:06.371 user 0m1.054s 00:06:06.371 sys 0m0.412s 00:06:06.371 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.371 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.371 ************************************ 00:06:06.371 END TEST dpdk_mem_utility 00:06:06.371 ************************************ 00:06:06.371 02:47:16 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.371 02:47:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.371 02:47:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.371 02:47:16 -- common/autotest_common.sh@10 -- # set +x 00:06:06.371 ************************************ 00:06:06.371 START TEST event 00:06:06.371 ************************************ 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.371 * Looking for test storage... 00:06:06.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.371 02:47:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.371 02:47:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.371 02:47:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.371 02:47:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.371 02:47:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.371 02:47:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.371 02:47:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.371 02:47:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.371 02:47:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.371 02:47:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.371 02:47:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.371 02:47:16 event -- scripts/common.sh@344 -- # case "$op" in 00:06:06.371 02:47:16 event -- scripts/common.sh@345 -- # : 1 00:06:06.371 02:47:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.371 02:47:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.371 02:47:16 event -- scripts/common.sh@365 -- # decimal 1 00:06:06.371 02:47:16 event -- scripts/common.sh@353 -- # local d=1 00:06:06.371 02:47:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.371 02:47:16 event -- scripts/common.sh@355 -- # echo 1 00:06:06.371 02:47:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.371 02:47:16 event -- scripts/common.sh@366 -- # decimal 2 00:06:06.371 02:47:16 event -- scripts/common.sh@353 -- # local d=2 00:06:06.371 02:47:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.371 02:47:16 event -- scripts/common.sh@355 -- # echo 2 00:06:06.371 02:47:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.371 02:47:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.371 02:47:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.371 02:47:16 event -- scripts/common.sh@368 -- # return 0 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.371 --rc genhtml_branch_coverage=1 00:06:06.371 --rc genhtml_function_coverage=1 00:06:06.371 --rc genhtml_legend=1 00:06:06.371 --rc geninfo_all_blocks=1 00:06:06.371 --rc geninfo_unexecuted_blocks=1 00:06:06.371 00:06:06.371 ' 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.371 --rc genhtml_branch_coverage=1 00:06:06.371 --rc genhtml_function_coverage=1 00:06:06.371 --rc genhtml_legend=1 00:06:06.371 --rc geninfo_all_blocks=1 00:06:06.371 --rc geninfo_unexecuted_blocks=1 00:06:06.371 00:06:06.371 ' 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.371 --rc genhtml_branch_coverage=1 00:06:06.371 --rc genhtml_function_coverage=1 00:06:06.371 --rc genhtml_legend=1 00:06:06.371 --rc geninfo_all_blocks=1 00:06:06.371 --rc geninfo_unexecuted_blocks=1 00:06:06.371 00:06:06.371 ' 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.371 --rc genhtml_branch_coverage=1 00:06:06.371 --rc genhtml_function_coverage=1 00:06:06.371 --rc genhtml_legend=1 00:06:06.371 --rc geninfo_all_blocks=1 00:06:06.371 --rc geninfo_unexecuted_blocks=1 00:06:06.371 00:06:06.371 ' 00:06:06.371 02:47:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:06.371 02:47:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.371 02:47:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:06.371 02:47:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.371 02:47:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.371 ************************************ 00:06:06.371 START TEST event_perf 00:06:06.371 ************************************ 00:06:06.371 02:47:16 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:06.371 Running I/O for 1 seconds...[2024-11-19 02:47:16.968080] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:06.371 [2024-11-19 02:47:16.968147] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104909 ] 00:06:06.630 [2024-11-19 02:47:17.036925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.630 [2024-11-19 02:47:17.088103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.630 [2024-11-19 02:47:17.088167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.630 [2024-11-19 02:47:17.088229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.630 [2024-11-19 02:47:17.088232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.568 Running I/O for 1 seconds... 00:06:07.568 lcore 0: 234299 00:06:07.568 lcore 1: 234298 00:06:07.568 lcore 2: 234298 00:06:07.568 lcore 3: 234299 00:06:07.568 done. 00:06:07.568 00:06:07.568 real 0m1.180s 00:06:07.568 user 0m4.107s 00:06:07.568 sys 0m0.069s 00:06:07.568 02:47:18 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.568 02:47:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.568 ************************************ 00:06:07.568 END TEST event_perf 00:06:07.568 ************************************ 00:06:07.568 02:47:18 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.568 02:47:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:07.568 02:47:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.568 02:47:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.568 ************************************ 00:06:07.568 START TEST event_reactor 00:06:07.568 ************************************ 00:06:07.568 02:47:18 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.828 [2024-11-19 02:47:18.198357] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:07.828 [2024-11-19 02:47:18.198422] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105065 ] 00:06:07.828 [2024-11-19 02:47:18.264079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.828 [2024-11-19 02:47:18.305960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.764 test_start 00:06:08.764 oneshot 00:06:08.764 tick 100 00:06:08.764 tick 100 00:06:08.764 tick 250 00:06:08.764 tick 100 00:06:08.764 tick 100 00:06:08.764 tick 100 00:06:08.764 tick 500 00:06:08.764 tick 250 00:06:08.764 tick 100 00:06:08.764 tick 100 00:06:08.764 tick 250 00:06:08.764 tick 100 00:06:08.764 tick 100 00:06:08.764 test_end 00:06:08.764 00:06:08.764 real 0m1.165s 00:06:08.764 user 0m1.099s 00:06:08.764 sys 0m0.062s 00:06:08.764 02:47:19 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.764 02:47:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:08.764 ************************************ 00:06:08.764 END TEST event_reactor 00:06:08.764 ************************************ 00:06:08.764 02:47:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.764 02:47:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:08.764 02:47:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.764 02:47:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.023 ************************************ 00:06:09.023 START TEST event_reactor_perf 00:06:09.023 ************************************ 00:06:09.024 02:47:19 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.024 [2024-11-19 02:47:19.413207] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:09.024 [2024-11-19 02:47:19.413275] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105217 ] 00:06:09.024 [2024-11-19 02:47:19.479502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.024 [2024-11-19 02:47:19.521300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.961 test_start 00:06:09.961 test_end 00:06:09.961 Performance: 429869 events per second 00:06:09.961 00:06:09.961 real 0m1.166s 00:06:09.961 user 0m1.097s 00:06:09.961 sys 0m0.063s 00:06:09.961 02:47:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.961 02:47:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.961 ************************************ 00:06:09.961 END TEST event_reactor_perf 00:06:09.961 ************************************ 00:06:10.221 02:47:20 event -- event/event.sh@49 -- # uname -s 00:06:10.221 02:47:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:10.221 02:47:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.221 02:47:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.221 02:47:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.221 02:47:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.221 ************************************ 00:06:10.221 START TEST event_scheduler 00:06:10.221 ************************************ 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.221 * Looking for test storage... 
00:06:10.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.221 02:47:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.221 --rc genhtml_branch_coverage=1 00:06:10.221 --rc genhtml_function_coverage=1 00:06:10.221 --rc genhtml_legend=1 00:06:10.221 --rc geninfo_all_blocks=1 00:06:10.221 --rc geninfo_unexecuted_blocks=1 00:06:10.221 00:06:10.221 ' 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.221 --rc genhtml_branch_coverage=1 00:06:10.221 --rc genhtml_function_coverage=1 00:06:10.221 --rc genhtml_legend=1 00:06:10.221 --rc geninfo_all_blocks=1 00:06:10.221 --rc geninfo_unexecuted_blocks=1 00:06:10.221 00:06:10.221 ' 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.221 --rc genhtml_branch_coverage=1 00:06:10.221 --rc genhtml_function_coverage=1 00:06:10.221 --rc genhtml_legend=1 00:06:10.221 --rc geninfo_all_blocks=1 00:06:10.221 --rc geninfo_unexecuted_blocks=1 00:06:10.221 00:06:10.221 ' 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.221 --rc genhtml_branch_coverage=1 00:06:10.221 --rc genhtml_function_coverage=1 00:06:10.221 --rc genhtml_legend=1 00:06:10.221 --rc geninfo_all_blocks=1 00:06:10.221 --rc geninfo_unexecuted_blocks=1 00:06:10.221 00:06:10.221 ' 00:06:10.221 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:10.221 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105409 00:06:10.221 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:10.221 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.221 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105409 
00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 105409 ']' 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.221 02:47:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.221 [2024-11-19 02:47:20.814054] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:10.221 [2024-11-19 02:47:20.814142] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105409 ] 00:06:10.481 [2024-11-19 02:47:20.880038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.481 [2024-11-19 02:47:20.928219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.481 [2024-11-19 02:47:20.928286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.481 [2024-11-19 02:47:20.928394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.481 [2024-11-19 02:47:20.928397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.481 02:47:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.481 02:47:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:10.481 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:10.481 02:47:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.481 02:47:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.481 [2024-11-19 02:47:21.037415] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:10.481 [2024-11-19 02:47:21.037441] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:10.481 [2024-11-19 02:47:21.037457] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:10.481 [2024-11-19 02:47:21.037483] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:10.481 [2024-11-19 02:47:21.037493] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:10.481 02:47:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.481 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:10.481 02:47:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.481 02:47:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.740 [2024-11-19 02:47:21.135594] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
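The step above switches the running scheduler test app to the dynamic scheduler over RPC and then completes subsystem initialization. A minimal manual equivalent, as a sketch only, against a target started with --wait-for-rpc and listening on the /var/tmp/spdk.sock socket the test waits on (commands run from the SPDK repo root):

# sketch: select the dynamic scheduler, finish init, then confirm the active scheduler
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
./scripts/rpc.py framework_get_scheduler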
00:06:10.740 02:47:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.740 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:10.740 02:47:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.740 02:47:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.740 02:47:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.740 ************************************ 00:06:10.740 START TEST scheduler_create_thread 00:06:10.740 ************************************ 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.740 2 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.740 3 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.740 4 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.740 5 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.740 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 6 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 7 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 8 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 9 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 10 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.741 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.307 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.307 00:06:11.307 real 0m0.591s 00:06:11.307 user 0m0.009s 00:06:11.307 sys 0m0.005s 00:06:11.307 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.307 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.307 ************************************ 00:06:11.307 END TEST scheduler_create_thread 00:06:11.307 ************************************ 00:06:11.307 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.307 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105409 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 105409 ']' 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 105409 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105409 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105409' 00:06:11.307 killing process with pid 105409 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 105409 00:06:11.307 02:47:21 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 105409 00:06:11.874 [2024-11-19 02:47:22.235649] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
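The scheduler_create_thread subtest above drives the scheduler test app through its RPC plugin: it creates pinned active and idle threads, retunes one thread's active level, and deletes another. A sketch of the same calls issued by hand (assuming PYTHONPATH includes test/event/scheduler so rpc.py can load scheduler_plugin; thread ids 11 and 12 are the ones returned in this particular run):

# sketch of the plugin RPCs exercised above
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12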
00:06:11.874 00:06:11.874 real 0m1.800s 00:06:11.874 user 0m2.440s 00:06:11.874 sys 0m0.357s 00:06:11.874 02:47:22 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.874 02:47:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.874 ************************************ 00:06:11.874 END TEST event_scheduler 00:06:11.874 ************************************ 00:06:11.874 02:47:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.874 02:47:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.874 02:47:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.874 02:47:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.874 02:47:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.874 ************************************ 00:06:11.874 START TEST app_repeat 00:06:11.874 ************************************ 00:06:11.874 02:47:22 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=105719 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105719' 00:06:11.874 Process app_repeat pid: 105719 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.874 spdk_app_start Round 0 00:06:11.874 02:47:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105719 /var/tmp/spdk-nbd.sock 00:06:11.874 02:47:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105719 ']' 00:06:11.874 02:47:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.874 02:47:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.874 02:47:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.874 02:47:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.874 02:47:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.134 [2024-11-19 02:47:22.499505] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:12.134 [2024-11-19 02:47:22.499573] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105719 ] 00:06:12.134 [2024-11-19 02:47:22.565171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.134 [2024-11-19 02:47:22.613141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.134 [2024-11-19 02:47:22.613145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.392 02:47:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.392 02:47:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.392 02:47:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.651 Malloc0 00:06:12.651 02:47:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.910 Malloc1 00:06:12.910 02:47:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.910 02:47:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.168 /dev/nbd0 00:06:13.168 02:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.168 02:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.168 02:47:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.169 1+0 records in 00:06:13.169 1+0 records out 00:06:13.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245727 s, 16.7 MB/s 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.169 02:47:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.169 02:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.169 02:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.169 02:47:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.427 /dev/nbd1 00:06:13.427 02:47:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.427 02:47:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.427 02:47:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.687 1+0 records in 00:06:13.687 1+0 records out 00:06:13.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235409 s, 17.4 MB/s 00:06:13.687 02:47:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.687 02:47:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.687 02:47:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.687 02:47:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.687 02:47:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.687 02:47:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.687 02:47:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.687 
02:47:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.687 02:47:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.687 02:47:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.946 { 00:06:13.946 "nbd_device": "/dev/nbd0", 00:06:13.946 "bdev_name": "Malloc0" 00:06:13.946 }, 00:06:13.946 { 00:06:13.946 "nbd_device": "/dev/nbd1", 00:06:13.946 "bdev_name": "Malloc1" 00:06:13.946 } 00:06:13.946 ]' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.946 { 00:06:13.946 "nbd_device": "/dev/nbd0", 00:06:13.946 "bdev_name": "Malloc0" 00:06:13.946 }, 00:06:13.946 { 00:06:13.946 "nbd_device": "/dev/nbd1", 00:06:13.946 "bdev_name": "Malloc1" 00:06:13.946 } 00:06:13.946 ]' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.946 /dev/nbd1' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.946 /dev/nbd1' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.946 256+0 records in 00:06:13.946 256+0 records out 00:06:13.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408377 s, 257 MB/s 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.946 256+0 records in 00:06:13.946 256+0 records out 00:06:13.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202795 s, 51.7 MB/s 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.946 256+0 records in 00:06:13.946 256+0 records out 00:06:13.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224574 s, 46.7 MB/s 00:06:13.946 02:47:24 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.946 02:47:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.204 02:47:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.463 02:47:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.721 02:47:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.721 02:47:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.721 02:47:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.979 02:47:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.979 02:47:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.237 02:47:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.237 [2024-11-19 02:47:25.835776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.496 [2024-11-19 02:47:25.880473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.496 [2024-11-19 02:47:25.880473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.496 [2024-11-19 02:47:25.937883] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.496 [2024-11-19 02:47:25.937954] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.781 02:47:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.781 02:47:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.781 spdk_app_start Round 1 00:06:18.781 02:47:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105719 /var/tmp/spdk-nbd.sock 00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105719 ']' 00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.781 02:47:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:18.781 02:47:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.781 Malloc0 00:06:18.781 02:47:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.039 Malloc1 00:06:19.039 02:47:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.039 02:47:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.297 /dev/nbd0 00:06:19.297 02:47:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.297 02:47:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:19.297 1+0 records in 00:06:19.297 1+0 records out 00:06:19.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294294 s, 13.9 MB/s 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.297 02:47:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.297 02:47:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.297 02:47:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.297 02:47:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.555 /dev/nbd1 00:06:19.555 02:47:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.555 02:47:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.555 1+0 records in 00:06:19.555 1+0 records out 00:06:19.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020648 s, 19.8 MB/s 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.555 02:47:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.555 02:47:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.555 02:47:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.555 02:47:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.555 02:47:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.555 02:47:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:20.122 { 00:06:20.122 "nbd_device": "/dev/nbd0", 00:06:20.122 "bdev_name": "Malloc0" 00:06:20.122 }, 00:06:20.122 { 00:06:20.122 "nbd_device": "/dev/nbd1", 00:06:20.122 "bdev_name": "Malloc1" 00:06:20.122 } 00:06:20.122 ]' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.122 { 00:06:20.122 "nbd_device": "/dev/nbd0", 00:06:20.122 "bdev_name": "Malloc0" 00:06:20.122 }, 00:06:20.122 { 00:06:20.122 "nbd_device": "/dev/nbd1", 00:06:20.122 "bdev_name": "Malloc1" 00:06:20.122 } 00:06:20.122 ]' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.122 /dev/nbd1' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.122 /dev/nbd1' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.122 256+0 records in 00:06:20.122 256+0 records out 00:06:20.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488664 s, 215 MB/s 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.122 256+0 records in 00:06:20.122 256+0 records out 00:06:20.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204464 s, 51.3 MB/s 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.122 256+0 records in 00:06:20.122 256+0 records out 00:06:20.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022046 s, 47.6 MB/s 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.122 02:47:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.380 02:47:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.638 02:47:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.897 02:47:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.897 02:47:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.155 02:47:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.414 [2024-11-19 02:47:31.917074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.414 [2024-11-19 02:47:31.962894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.414 [2024-11-19 02:47:31.962894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.414 [2024-11-19 02:47:32.021521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.414 [2024-11-19 02:47:32.021594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.696 02:47:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.696 02:47:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.696 spdk_app_start Round 2 00:06:24.696 02:47:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105719 /var/tmp/spdk-nbd.sock 00:06:24.696 02:47:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105719 ']' 00:06:24.696 02:47:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.696 02:47:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.696 02:47:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:24.696 02:47:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.696 02:47:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.696 02:47:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.696 02:47:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:24.696 02:47:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.696 Malloc0 00:06:24.696 02:47:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.954 Malloc1 00:06:24.954 02:47:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.954 02:47:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.954 02:47:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.954 02:47:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.955 02:47:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.520 /dev/nbd0 00:06:25.521 02:47:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.521 02:47:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:25.521 1+0 records in 00:06:25.521 1+0 records out 00:06:25.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018081 s, 22.7 MB/s 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.521 02:47:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:25.521 02:47:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.521 02:47:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.521 02:47:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.779 /dev/nbd1 00:06:25.779 02:47:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.779 02:47:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.779 1+0 records in 00:06:25.779 1+0 records out 00:06:25.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232469 s, 17.6 MB/s 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.779 02:47:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:25.779 02:47:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.779 02:47:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.779 02:47:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.779 02:47:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.779 02:47:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:26.038 { 00:06:26.038 "nbd_device": "/dev/nbd0", 00:06:26.038 "bdev_name": "Malloc0" 00:06:26.038 }, 00:06:26.038 { 00:06:26.038 "nbd_device": "/dev/nbd1", 00:06:26.038 "bdev_name": "Malloc1" 00:06:26.038 } 00:06:26.038 ]' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.038 { 00:06:26.038 "nbd_device": "/dev/nbd0", 00:06:26.038 "bdev_name": "Malloc0" 00:06:26.038 }, 00:06:26.038 { 00:06:26.038 "nbd_device": "/dev/nbd1", 00:06:26.038 "bdev_name": "Malloc1" 00:06:26.038 } 00:06:26.038 ]' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.038 /dev/nbd1' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.038 /dev/nbd1' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.038 256+0 records in 00:06:26.038 256+0 records out 00:06:26.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385217 s, 272 MB/s 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.038 256+0 records in 00:06:26.038 256+0 records out 00:06:26.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020217 s, 51.9 MB/s 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.038 256+0 records in 00:06:26.038 256+0 records out 00:06:26.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218019 s, 48.1 MB/s 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.038 02:47:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.297 02:47:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.863 02:47:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.120 02:47:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.121 02:47:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.121 02:47:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.387 02:47:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.387 [2024-11-19 02:47:37.985974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.646 [2024-11-19 02:47:38.032894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.646 [2024-11-19 02:47:38.032899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.646 [2024-11-19 02:47:38.088109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.646 [2024-11-19 02:47:38.088175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.933 02:47:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 105719 /var/tmp/spdk-nbd.sock 00:06:30.933 02:47:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105719 ']' 00:06:30.933 02:47:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.933 02:47:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.933 02:47:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:30.933 02:47:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.933 02:47:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:30.933 02:47:41 event.app_repeat -- event/event.sh@39 -- # killprocess 105719 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 105719 ']' 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 105719 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105719 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105719' 00:06:30.933 killing process with pid 105719 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 105719 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 105719 00:06:30.933 spdk_app_start is called in Round 0. 00:06:30.933 Shutdown signal received, stop current app iteration 00:06:30.933 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:06:30.933 spdk_app_start is called in Round 1. 00:06:30.933 Shutdown signal received, stop current app iteration 00:06:30.933 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:06:30.933 spdk_app_start is called in Round 2. 00:06:30.933 Shutdown signal received, stop current app iteration 00:06:30.933 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:06:30.933 spdk_app_start is called in Round 3. 
00:06:30.933 Shutdown signal received, stop current app iteration 00:06:30.933 02:47:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:30.933 02:47:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:30.933 00:06:30.933 real 0m18.797s 00:06:30.933 user 0m41.635s 00:06:30.933 sys 0m3.316s 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.933 02:47:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.933 ************************************ 00:06:30.933 END TEST app_repeat 00:06:30.933 ************************************ 00:06:30.933 02:47:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:30.933 02:47:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:30.933 02:47:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.933 02:47:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.933 02:47:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.933 ************************************ 00:06:30.933 START TEST cpu_locks 00:06:30.933 ************************************ 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:30.933 * Looking for test storage... 00:06:30.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.933 02:47:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.933 --rc genhtml_branch_coverage=1 00:06:30.933 --rc genhtml_function_coverage=1 00:06:30.933 --rc genhtml_legend=1 00:06:30.933 --rc geninfo_all_blocks=1 00:06:30.933 --rc geninfo_unexecuted_blocks=1 00:06:30.933 00:06:30.933 ' 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.933 --rc genhtml_branch_coverage=1 00:06:30.933 --rc genhtml_function_coverage=1 00:06:30.933 --rc genhtml_legend=1 00:06:30.933 --rc geninfo_all_blocks=1 00:06:30.933 --rc geninfo_unexecuted_blocks=1 00:06:30.933 00:06:30.933 ' 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.933 --rc genhtml_branch_coverage=1 00:06:30.933 --rc genhtml_function_coverage=1 00:06:30.933 --rc genhtml_legend=1 00:06:30.933 --rc geninfo_all_blocks=1 00:06:30.933 --rc geninfo_unexecuted_blocks=1 00:06:30.933 00:06:30.933 ' 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.933 --rc genhtml_branch_coverage=1 00:06:30.933 --rc genhtml_function_coverage=1 00:06:30.933 --rc genhtml_legend=1 00:06:30.933 --rc geninfo_all_blocks=1 00:06:30.933 --rc geninfo_unexecuted_blocks=1 00:06:30.933 00:06:30.933 ' 00:06:30.933 02:47:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:30.933 02:47:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:30.933 02:47:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:30.933 02:47:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.933 02:47:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.933 ************************************ 
00:06:30.933 START TEST default_locks 00:06:30.933 ************************************ 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=108183 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 108183 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 108183 ']' 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.933 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.934 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.192 [2024-11-19 02:47:41.558790] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:31.192 [2024-11-19 02:47:41.558882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108183 ] 00:06:31.193 [2024-11-19 02:47:41.626096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.193 [2024-11-19 02:47:41.672249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.451 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.451 02:47:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:31.451 02:47:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 108183 00:06:31.451 02:47:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 108183 00:06:31.451 02:47:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.710 lslocks: write error 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 108183 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 108183 ']' 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 108183 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108183 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108183' 
00:06:31.710 killing process with pid 108183 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 108183 00:06:31.710 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 108183 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 108183 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 108183 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 108183 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 108183 ']' 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (108183) - No such process 00:06:31.970 ERROR: process (pid: 108183) is no longer running 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.970 00:06:31.970 real 0m1.072s 00:06:31.970 user 0m1.046s 00:06:31.970 sys 0m0.485s 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.970 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.970 ************************************ 00:06:31.970 END TEST default_locks 00:06:31.970 ************************************ 00:06:32.229 02:47:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:32.229 02:47:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.229 02:47:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.229 02:47:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.229 ************************************ 00:06:32.229 START TEST default_locks_via_rpc 00:06:32.229 ************************************ 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108325 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 108325 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 108325 ']' 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
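For context on the default_locks run traced above: the locks_exist and no_locks helpers reduce to checking whether the spdk_tgt process holds a per-core advisory lock file. A minimal standalone check, assuming a single target already running and pinned to core 0 (finding the pid with pgrep is an illustrative shortcut, not part of the test script):

  PID=$(pgrep -f spdk_tgt | head -n1)        # hypothetical way to locate the target pid
  lslocks -p "$PID" | grep -q spdk_cpu_lock \
    && echo "core lock held" \
    || echo "no core lock"                   # mirrors the locks_exist check in the trace above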
00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.229 02:47:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.229 [2024-11-19 02:47:42.685384] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:32.229 [2024-11-19 02:47:42.685474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108325 ] 00:06:32.229 [2024-11-19 02:47:42.750733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.229 [2024-11-19 02:47:42.796652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.488 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.489 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.489 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.489 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.489 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.489 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 108325 00:06:32.489 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 108325 00:06:32.489 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.747 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 108325 00:06:32.747 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 108325 ']' 00:06:32.747 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 108325 00:06:32.747 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:32.747 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.747 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108325 00:06:33.006 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.006 02:47:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.006 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108325' 00:06:33.006 killing process with pid 108325 00:06:33.006 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 108325 00:06:33.006 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 108325 00:06:33.266 00:06:33.266 real 0m1.148s 00:06:33.266 user 0m1.118s 00:06:33.266 sys 0m0.492s 00:06:33.266 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.266 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 ************************************ 00:06:33.266 END TEST default_locks_via_rpc 00:06:33.266 ************************************ 00:06:33.266 02:47:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:33.266 02:47:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.266 02:47:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.266 02:47:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 ************************************ 00:06:33.266 START TEST non_locking_app_on_locked_coremask 00:06:33.266 ************************************ 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108536 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108536 /var/tmp/spdk.sock 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108536 ']' 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.266 02:47:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 [2024-11-19 02:47:43.880419] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
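The default_locks_via_rpc run that finishes above toggles the core locks at runtime rather than at startup, using the framework_disable_cpumask_locks and framework_enable_cpumask_locks JSON-RPC methods through the test harness's rpc_cmd wrapper. A minimal sketch of that sequence against the default socket, assuming a target started with -m 0x1 and the autotest helpers sourced:

  rpc_cmd framework_disable_cpumask_locks   # release the per-core lock files
  rpc_cmd framework_enable_cpumask_locks    # re-claim them; lslocks should find spdk_cpu_lock again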
00:06:33.266 [2024-11-19 02:47:43.880507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108536 ] 00:06:33.525 [2024-11-19 02:47:43.947645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.525 [2024-11-19 02:47:43.992174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108551 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108551 /var/tmp/spdk2.sock 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108551 ']' 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.784 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.784 [2024-11-19 02:47:44.294185] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:33.784 [2024-11-19 02:47:44.294273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108551 ] 00:06:33.784 [2024-11-19 02:47:44.391029] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
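The non_locking_app_on_locked_coremask setup traced above runs two targets on the same core: the first claims the core lock as usual, while the second is started with --disable-cpumask-locks and its own RPC socket, so it comes up without claiming anything ("CPU core locks deactivated"). A condensed sketch of that launch pair, with spdk_tgt standing in for the full build path shown in the log:

  spdk_tgt -m 0x1 &                                                    # first target, claims core 0
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &     # second target, no lock claim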
00:06:33.784 [2024-11-19 02:47:44.391054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.043 [2024-11-19 02:47:44.480132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.610 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.610 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.610 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108536 00:06:34.610 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108536 00:06:34.610 02:47:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.869 lslocks: write error 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108536 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108536 ']' 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108536 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108536 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108536' 00:06:34.869 killing process with pid 108536 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108536 00:06:34.869 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108536 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108551 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108551 ']' 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108551 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108551 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108551' 00:06:35.804 killing 
process with pid 108551 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108551 00:06:35.804 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108551 00:06:36.065 00:06:36.065 real 0m2.743s 00:06:36.065 user 0m2.748s 00:06:36.065 sys 0m0.962s 00:06:36.065 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.065 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.065 ************************************ 00:06:36.065 END TEST non_locking_app_on_locked_coremask 00:06:36.065 ************************************ 00:06:36.066 02:47:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:36.066 02:47:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.066 02:47:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.066 02:47:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.066 ************************************ 00:06:36.066 START TEST locking_app_on_unlocked_coremask 00:06:36.066 ************************************ 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108847 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 108847 /var/tmp/spdk.sock 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108847 ']' 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.066 02:47:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.066 [2024-11-19 02:47:46.677839] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:36.066 [2024-11-19 02:47:46.677933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108847 ] 00:06:36.325 [2024-11-19 02:47:46.741408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.325 [2024-11-19 02:47:46.741446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.325 [2024-11-19 02:47:46.782586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.584 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.584 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.584 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=108852 00:06:36.584 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.584 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 108852 /var/tmp/spdk2.sock 00:06:36.585 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108852 ']' 00:06:36.585 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.585 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.585 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.585 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.585 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.585 [2024-11-19 02:47:47.088185] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:36.585 [2024-11-19 02:47:47.088267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108852 ] 00:06:36.585 [2024-11-19 02:47:47.191583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.844 [2024-11-19 02:47:47.280976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.411 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.411 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.411 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 108852 00:06:37.411 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108852 00:06:37.411 02:47:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.669 lslocks: write error 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 108847 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108847 ']' 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108847 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108847 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108847' 00:06:37.669 killing process with pid 108847 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108847 00:06:37.669 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108847 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 108852 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108852 ']' 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108852 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108852 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.605 02:47:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108852' 00:06:38.605 killing process with pid 108852 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108852 00:06:38.605 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108852 00:06:38.866 00:06:38.866 real 0m2.800s 00:06:38.866 user 0m2.834s 00:06:38.866 sys 0m1.002s 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.866 ************************************ 00:06:38.866 END TEST locking_app_on_unlocked_coremask 00:06:38.866 ************************************ 00:06:38.866 02:47:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:38.866 02:47:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.866 02:47:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.866 02:47:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.866 ************************************ 00:06:38.866 START TEST locking_app_on_locked_coremask 00:06:38.866 ************************************ 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=109274 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 109274 /var/tmp/spdk.sock 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109274 ']' 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.866 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.126 [2024-11-19 02:47:49.524826] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:39.126 [2024-11-19 02:47:49.524914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109274 ] 00:06:39.126 [2024-11-19 02:47:49.588189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.126 [2024-11-19 02:47:49.631028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=109284 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 109284 /var/tmp/spdk2.sock 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109284 /var/tmp/spdk2.sock 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109284 /var/tmp/spdk2.sock 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109284 ']' 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.385 02:47:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.385 [2024-11-19 02:47:49.939388] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
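In the locking_app_on_locked_coremask case being set up here, the second target keeps cpumask locking enabled and reuses the same -m 0x1 mask, so it is expected to fail; the NOT waitforlisten wrapper asserts that failure, and the trace that follows shows the resulting claim error and exit. A reduced sketch of the expected-failure launch, again with spdk_tgt standing in for the full build path:

  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # expected to abort: core 0 is already claimed by the first target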
00:06:39.385 [2024-11-19 02:47:49.939473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109284 ] 00:06:39.643 [2024-11-19 02:47:50.042201] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 109274 has claimed it. 00:06:39.643 [2024-11-19 02:47:50.042277] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109284) - No such process 00:06:40.209 ERROR: process (pid: 109284) is no longer running 00:06:40.209 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.209 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:40.209 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:40.210 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.210 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.210 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.210 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 109274 00:06:40.210 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109274 00:06:40.210 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.468 lslocks: write error 00:06:40.468 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 109274 00:06:40.468 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 109274 ']' 00:06:40.468 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 109274 00:06:40.468 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.468 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.468 02:47:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109274 00:06:40.468 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.468 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.468 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109274' 00:06:40.468 killing process with pid 109274 00:06:40.468 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 109274 00:06:40.468 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 109274 00:06:41.035 00:06:41.035 real 0m1.937s 00:06:41.035 user 0m2.174s 00:06:41.035 sys 0m0.623s 00:06:41.035 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.035 
02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.035 ************************************ 00:06:41.035 END TEST locking_app_on_locked_coremask 00:06:41.035 ************************************ 00:06:41.035 02:47:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:41.035 02:47:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.035 02:47:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.035 02:47:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.035 ************************************ 00:06:41.035 START TEST locking_overlapped_coremask 00:06:41.035 ************************************ 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109452 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109452 /var/tmp/spdk.sock 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109452 ']' 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.035 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.035 [2024-11-19 02:47:51.513784] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:41.035 [2024-11-19 02:47:51.513873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109452 ] 00:06:41.035 [2024-11-19 02:47:51.580863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.035 [2024-11-19 02:47:51.630524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.035 [2024-11-19 02:47:51.630587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.035 [2024-11-19 02:47:51.630591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109577 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109577 /var/tmp/spdk2.sock 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109577 /var/tmp/spdk2.sock 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109577 /var/tmp/spdk2.sock 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109577 ']' 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.295 02:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.554 [2024-11-19 02:47:51.951621] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
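The locking_overlapped_coremask pair above uses partially overlapping masks: the first target takes 0x7 (cores 0-2) and the second asks for 0x1c (cores 2-4), so the only contested core is core 2, which is exactly the core named in the claim error that follows. A one-liner to confirm the overlap, purely illustrative:

  printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2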
00:06:41.554 [2024-11-19 02:47:51.951732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109577 ] 00:06:41.554 [2024-11-19 02:47:52.058639] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109452 has claimed it. 00:06:41.554 [2024-11-19 02:47:52.058721] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109577) - No such process 00:06:42.121 ERROR: process (pid: 109577) is no longer running 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109452 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 109452 ']' 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 109452 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109452 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109452' 00:06:42.121 killing process with pid 109452 00:06:42.121 02:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 109452 00:06:42.121 02:47:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 109452 00:06:42.689 00:06:42.689 real 0m1.626s 00:06:42.689 user 0m4.603s 00:06:42.689 sys 0m0.460s 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.689 ************************************ 00:06:42.689 END TEST locking_overlapped_coremask 00:06:42.689 ************************************ 00:06:42.689 02:47:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.689 02:47:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.689 02:47:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.689 02:47:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.689 ************************************ 00:06:42.689 START TEST locking_overlapped_coremask_via_rpc 00:06:42.689 ************************************ 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109745 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 109745 /var/tmp/spdk.sock 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109745 ']' 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.689 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.689 [2024-11-19 02:47:53.194409] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:42.689 [2024-11-19 02:47:53.194512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109745 ] 00:06:42.689 [2024-11-19 02:47:53.261261] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
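The check_remaining_locks helper traced in the overlapped-coremask test above simply compares the lock files present under /var/tmp against the set implied by the cpumask. A compact restatement of that comparison for a 0x7 mask, using the same names as the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match cpumask 0x7"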
00:06:42.690 [2024-11-19 02:47:53.261306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.949 [2024-11-19 02:47:53.313698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.949 [2024-11-19 02:47:53.314708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.949 [2024-11-19 02:47:53.314721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109757 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 109757 /var/tmp/spdk2.sock 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109757 ']' 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.208 02:47:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.208 [2024-11-19 02:47:53.637129] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:43.208 [2024-11-19 02:47:53.637215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109757 ] 00:06:43.208 [2024-11-19 02:47:53.741882] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.208 [2024-11-19 02:47:53.741916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.467 [2024-11-19 02:47:53.839917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.467 [2024-11-19 02:47:53.843746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.467 [2024-11-19 02:47:53.843749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.034 [2024-11-19 02:47:54.628782] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109745 has claimed it. 
00:06:44.034 request: 00:06:44.034 { 00:06:44.034 "method": "framework_enable_cpumask_locks", 00:06:44.034 "req_id": 1 00:06:44.034 } 00:06:44.034 Got JSON-RPC error response 00:06:44.034 response: 00:06:44.034 { 00:06:44.034 "code": -32603, 00:06:44.034 "message": "Failed to claim CPU core: 2" 00:06:44.034 } 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 109745 /var/tmp/spdk.sock 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109745 ']' 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.034 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.035 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.035 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.035 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 109757 /var/tmp/spdk2.sock 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109757 ']' 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.292 02:47:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.858 00:06:44.858 real 0m2.040s 00:06:44.858 user 0m1.151s 00:06:44.858 sys 0m0.174s 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.858 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.858 ************************************ 00:06:44.858 END TEST locking_overlapped_coremask_via_rpc 00:06:44.858 ************************************ 00:06:44.858 02:47:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:44.858 02:47:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109745 ]] 00:06:44.858 02:47:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109745 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109745 ']' 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109745 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109745 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109745' 00:06:44.858 killing process with pid 109745 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109745 00:06:44.858 02:47:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109745 00:06:45.117 02:47:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109757 ]] 00:06:45.117 02:47:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109757 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109757 ']' 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109757 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
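Annotation, not part of the captured log: a minimal sketch of the scenario the locking_overlapped_coremask_via_rpc test above exercises, built only from the binaries, flags, RPC name and lock-file paths visible in the trace. The 0x7 mask for the first target is an assumption inferred from its reactors starting on cores 0-2, paths are shortened relative to the workspace, and rpc.py is assumed to default to /var/tmp/spdk.sock when -s is omitted.
./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock --disable-cpumask-locks &      # first target, cores 0-2 (mask inferred), starts without claiming core locks
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &    # second target, cores 2-4, overlapping the first on core 2
./scripts/rpc.py framework_enable_cpumask_locks                                  # first target claims /var/tmp/spdk_cpu_lock_000..002
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks           # expected to fail with "Failed to claim CPU core: 2" while the first target holds it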
00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109757 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109757' 00:06:45.117 killing process with pid 109757 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109757 00:06:45.117 02:47:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109757 00:06:45.684 02:47:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.684 02:47:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.684 02:47:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109745 ]] 00:06:45.684 02:47:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109745 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109745 ']' 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109745 00:06:45.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109745) - No such process 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109745 is not found' 00:06:45.684 Process with pid 109745 is not found 00:06:45.684 02:47:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109757 ]] 00:06:45.684 02:47:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109757 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109757 ']' 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109757 00:06:45.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109757) - No such process 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109757 is not found' 00:06:45.684 Process with pid 109757 is not found 00:06:45.684 02:47:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.684 00:06:45.684 real 0m14.745s 00:06:45.684 user 0m27.167s 00:06:45.684 sys 0m5.161s 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.684 02:47:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.684 ************************************ 00:06:45.684 END TEST cpu_locks 00:06:45.684 ************************************ 00:06:45.684 00:06:45.684 real 0m39.312s 00:06:45.684 user 1m17.781s 00:06:45.684 sys 0m9.274s 00:06:45.684 02:47:56 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.684 02:47:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.684 ************************************ 00:06:45.684 END TEST event 00:06:45.684 ************************************ 00:06:45.684 02:47:56 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:45.684 02:47:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.684 02:47:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.684 02:47:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.684 ************************************ 00:06:45.684 START TEST thread 00:06:45.684 ************************************ 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:45.684 * Looking for test storage... 00:06:45.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.684 02:47:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.684 02:47:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.684 02:47:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.684 02:47:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.684 02:47:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.684 02:47:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.684 02:47:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.684 02:47:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.684 02:47:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.684 02:47:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.684 02:47:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.684 02:47:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:45.684 02:47:56 thread -- scripts/common.sh@345 -- # : 1 00:06:45.684 02:47:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.684 02:47:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.684 02:47:56 thread -- scripts/common.sh@365 -- # decimal 1 00:06:45.684 02:47:56 thread -- scripts/common.sh@353 -- # local d=1 00:06:45.684 02:47:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.684 02:47:56 thread -- scripts/common.sh@355 -- # echo 1 00:06:45.684 02:47:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.684 02:47:56 thread -- scripts/common.sh@366 -- # decimal 2 00:06:45.684 02:47:56 thread -- scripts/common.sh@353 -- # local d=2 00:06:45.684 02:47:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.684 02:47:56 thread -- scripts/common.sh@355 -- # echo 2 00:06:45.684 02:47:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.684 02:47:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.684 02:47:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.684 02:47:56 thread -- scripts/common.sh@368 -- # return 0 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.684 --rc genhtml_branch_coverage=1 00:06:45.684 --rc genhtml_function_coverage=1 00:06:45.684 --rc genhtml_legend=1 00:06:45.684 --rc geninfo_all_blocks=1 00:06:45.684 --rc geninfo_unexecuted_blocks=1 00:06:45.684 00:06:45.684 ' 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.684 --rc genhtml_branch_coverage=1 00:06:45.684 --rc genhtml_function_coverage=1 00:06:45.684 --rc genhtml_legend=1 00:06:45.684 --rc geninfo_all_blocks=1 00:06:45.684 --rc geninfo_unexecuted_blocks=1 00:06:45.684 00:06:45.684 ' 00:06:45.684 02:47:56 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.684 --rc genhtml_branch_coverage=1 00:06:45.684 --rc genhtml_function_coverage=1 00:06:45.684 --rc genhtml_legend=1 00:06:45.684 --rc geninfo_all_blocks=1 00:06:45.684 --rc geninfo_unexecuted_blocks=1 00:06:45.684 00:06:45.684 ' 00:06:45.684 02:47:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.684 --rc genhtml_branch_coverage=1 00:06:45.684 --rc genhtml_function_coverage=1 00:06:45.684 --rc genhtml_legend=1 00:06:45.684 --rc geninfo_all_blocks=1 00:06:45.685 --rc geninfo_unexecuted_blocks=1 00:06:45.685 00:06:45.685 ' 00:06:45.685 02:47:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.685 02:47:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:45.685 02:47:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.685 02:47:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.943 ************************************ 00:06:45.943 START TEST thread_poller_perf 00:06:45.943 ************************************ 00:06:45.943 02:47:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.943 [2024-11-19 02:47:56.322185] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:45.943 [2024-11-19 02:47:56.322253] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110255 ] 00:06:45.943 [2024-11-19 02:47:56.390636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.943 [2024-11-19 02:47:56.438804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.943 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:46.879 [2024-11-19T01:47:57.494Z] ====================================== 00:06:46.879 [2024-11-19T01:47:57.494Z] busy:2706440073 (cyc) 00:06:46.879 [2024-11-19T01:47:57.494Z] total_run_count: 364000 00:06:46.879 [2024-11-19T01:47:57.494Z] tsc_hz: 2700000000 (cyc) 00:06:46.879 [2024-11-19T01:47:57.494Z] ====================================== 00:06:46.879 [2024-11-19T01:47:57.494Z] poller_cost: 7435 (cyc), 2753 (nsec) 00:06:46.879 00:06:46.879 real 0m1.180s 00:06:46.879 user 0m1.103s 00:06:46.879 sys 0m0.072s 00:06:46.879 02:47:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.879 02:47:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 ************************************ 00:06:46.879 END TEST thread_poller_perf 00:06:46.879 ************************************ 00:06:47.138 02:47:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.138 02:47:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:47.138 02:47:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.138 02:47:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.138 ************************************ 00:06:47.138 START TEST thread_poller_perf 00:06:47.138 ************************************ 00:06:47.138 02:47:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.138 [2024-11-19 02:47:57.554368] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:47.138 [2024-11-19 02:47:57.554437] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110407 ] 00:06:47.138 [2024-11-19 02:47:57.620261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.138 [2024-11-19 02:47:57.662820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.138 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:48.515 [2024-11-19T01:47:59.130Z] ====================================== 00:06:48.515 [2024-11-19T01:47:59.130Z] busy:2702515611 (cyc) 00:06:48.515 [2024-11-19T01:47:59.130Z] total_run_count: 4852000 00:06:48.515 [2024-11-19T01:47:59.130Z] tsc_hz: 2700000000 (cyc) 00:06:48.515 [2024-11-19T01:47:59.130Z] ====================================== 00:06:48.515 [2024-11-19T01:47:59.130Z] poller_cost: 556 (cyc), 205 (nsec) 00:06:48.515 00:06:48.515 real 0m1.169s 00:06:48.515 user 0m1.096s 00:06:48.515 sys 0m0.068s 00:06:48.515 02:47:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.515 02:47:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.515 ************************************ 00:06:48.515 END TEST thread_poller_perf 00:06:48.515 ************************************ 00:06:48.515 02:47:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.515 00:06:48.515 real 0m2.594s 00:06:48.515 user 0m2.341s 00:06:48.515 sys 0m0.257s 00:06:48.515 02:47:58 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.515 02:47:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.515 ************************************ 00:06:48.515 END TEST thread 00:06:48.515 ************************************ 00:06:48.515 02:47:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:48.515 02:47:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.515 02:47:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.515 02:47:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.515 02:47:58 -- common/autotest_common.sh@10 -- # set +x 00:06:48.515 ************************************ 00:06:48.515 START TEST app_cmdline 00:06:48.515 ************************************ 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.515 * Looking for test storage... 
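Annotation, not part of the captured log: the poller_cost lines in the two poller_perf result blocks above appear to be the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz; the exact formula and rounding inside poller_perf are assumptions here, checked only against the printed numbers. A quick re-derivation of the first run in awk:
awk 'BEGIN { busy=2706440073; runs=364000; hz=2700000000;
             cyc = int(busy / runs); ns = int(cyc * 1e9 / hz);
             printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns }'
# prints "poller_cost: 7435 (cyc), 2753 (nsec)", matching the 1-microsecond-period run above; the same arithmetic on the second run (2702515611 busy cycles over 4852000 runs) gives the reported 556 (cyc), 205 (nsec).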
00:06:48.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.515 02:47:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.515 --rc genhtml_branch_coverage=1 00:06:48.515 --rc genhtml_function_coverage=1 00:06:48.515 --rc genhtml_legend=1 00:06:48.515 --rc geninfo_all_blocks=1 00:06:48.515 --rc geninfo_unexecuted_blocks=1 00:06:48.515 00:06:48.515 ' 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.515 --rc genhtml_branch_coverage=1 00:06:48.515 --rc genhtml_function_coverage=1 00:06:48.515 --rc genhtml_legend=1 00:06:48.515 --rc geninfo_all_blocks=1 00:06:48.515 --rc geninfo_unexecuted_blocks=1 
00:06:48.515 00:06:48.515 ' 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.515 --rc genhtml_branch_coverage=1 00:06:48.515 --rc genhtml_function_coverage=1 00:06:48.515 --rc genhtml_legend=1 00:06:48.515 --rc geninfo_all_blocks=1 00:06:48.515 --rc geninfo_unexecuted_blocks=1 00:06:48.515 00:06:48.515 ' 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.515 --rc genhtml_branch_coverage=1 00:06:48.515 --rc genhtml_function_coverage=1 00:06:48.515 --rc genhtml_legend=1 00:06:48.515 --rc geninfo_all_blocks=1 00:06:48.515 --rc geninfo_unexecuted_blocks=1 00:06:48.515 00:06:48.515 ' 00:06:48.515 02:47:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.515 02:47:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110613 00:06:48.515 02:47:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.515 02:47:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110613 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 110613 ']' 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.515 02:47:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.515 [2024-11-19 02:47:58.981075] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:48.515 [2024-11-19 02:47:58.981175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110613 ] 00:06:48.515 [2024-11-19 02:47:59.046335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.515 [2024-11-19 02:47:59.092507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.774 02:47:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.775 02:47:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:48.775 02:47:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:49.032 { 00:06:49.032 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:49.032 "fields": { 00:06:49.032 "major": 25, 00:06:49.032 "minor": 1, 00:06:49.032 "patch": 0, 00:06:49.032 "suffix": "-pre", 00:06:49.032 "commit": "d47eb51c9" 00:06:49.032 } 00:06:49.032 } 00:06:49.032 02:47:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:49.032 02:47:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:49.032 02:47:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:49.032 02:47:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:49.032 02:47:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:49.032 02:47:59 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.032 02:47:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.032 02:47:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:49.032 02:47:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:49.032 02:47:59 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.291 02:47:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:49.291 02:47:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:49.291 02:47:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:49.291 02:47:59 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.549 request: 00:06:49.549 { 00:06:49.549 "method": "env_dpdk_get_mem_stats", 00:06:49.549 "req_id": 1 00:06:49.549 } 00:06:49.549 Got JSON-RPC error response 00:06:49.549 response: 00:06:49.549 { 00:06:49.549 "code": -32601, 00:06:49.549 "message": "Method not found" 00:06:49.549 } 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.550 02:47:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110613 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 110613 ']' 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 110613 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110613 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110613' 00:06:49.550 killing process with pid 110613 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 110613 00:06:49.550 02:47:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 110613 00:06:49.809 00:06:49.809 real 0m1.555s 00:06:49.809 user 0m1.944s 00:06:49.809 sys 0m0.478s 00:06:49.809 02:48:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.809 02:48:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.809 ************************************ 00:06:49.809 END TEST app_cmdline 00:06:49.809 ************************************ 00:06:49.809 02:48:00 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.809 02:48:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.809 02:48:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.809 02:48:00 -- common/autotest_common.sh@10 -- # set +x 00:06:49.809 ************************************ 00:06:49.809 START TEST version 00:06:49.809 ************************************ 00:06:49.809 02:48:00 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:50.069 * Looking for test storage... 
00:06:50.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.069 02:48:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.069 02:48:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.069 02:48:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.069 02:48:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.069 02:48:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.069 02:48:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.069 02:48:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.069 02:48:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.069 02:48:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.069 02:48:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.069 02:48:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.069 02:48:00 version -- scripts/common.sh@344 -- # case "$op" in 00:06:50.069 02:48:00 version -- scripts/common.sh@345 -- # : 1 00:06:50.069 02:48:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.069 02:48:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.069 02:48:00 version -- scripts/common.sh@365 -- # decimal 1 00:06:50.069 02:48:00 version -- scripts/common.sh@353 -- # local d=1 00:06:50.069 02:48:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.069 02:48:00 version -- scripts/common.sh@355 -- # echo 1 00:06:50.069 02:48:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.069 02:48:00 version -- scripts/common.sh@366 -- # decimal 2 00:06:50.069 02:48:00 version -- scripts/common.sh@353 -- # local d=2 00:06:50.069 02:48:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.069 02:48:00 version -- scripts/common.sh@355 -- # echo 2 00:06:50.069 02:48:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.069 02:48:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.069 02:48:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.069 02:48:00 version -- scripts/common.sh@368 -- # return 0 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.069 --rc genhtml_branch_coverage=1 00:06:50.069 --rc genhtml_function_coverage=1 00:06:50.069 --rc genhtml_legend=1 00:06:50.069 --rc geninfo_all_blocks=1 00:06:50.069 --rc geninfo_unexecuted_blocks=1 00:06:50.069 00:06:50.069 ' 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.069 --rc genhtml_branch_coverage=1 00:06:50.069 --rc genhtml_function_coverage=1 00:06:50.069 --rc genhtml_legend=1 00:06:50.069 --rc geninfo_all_blocks=1 00:06:50.069 --rc geninfo_unexecuted_blocks=1 00:06:50.069 00:06:50.069 ' 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.069 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.069 --rc genhtml_branch_coverage=1 00:06:50.069 --rc genhtml_function_coverage=1 00:06:50.069 --rc genhtml_legend=1 00:06:50.069 --rc geninfo_all_blocks=1 00:06:50.069 --rc geninfo_unexecuted_blocks=1 00:06:50.069 00:06:50.069 ' 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.069 --rc genhtml_branch_coverage=1 00:06:50.069 --rc genhtml_function_coverage=1 00:06:50.069 --rc genhtml_legend=1 00:06:50.069 --rc geninfo_all_blocks=1 00:06:50.069 --rc geninfo_unexecuted_blocks=1 00:06:50.069 00:06:50.069 ' 00:06:50.069 02:48:00 version -- app/version.sh@17 -- # get_header_version major 00:06:50.069 02:48:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # cut -f2 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.069 02:48:00 version -- app/version.sh@17 -- # major=25 00:06:50.069 02:48:00 version -- app/version.sh@18 -- # get_header_version minor 00:06:50.069 02:48:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # cut -f2 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.069 02:48:00 version -- app/version.sh@18 -- # minor=1 00:06:50.069 02:48:00 version -- app/version.sh@19 -- # get_header_version patch 00:06:50.069 02:48:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # cut -f2 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.069 02:48:00 version -- app/version.sh@19 -- # patch=0 00:06:50.069 02:48:00 version -- app/version.sh@20 -- # get_header_version suffix 00:06:50.069 02:48:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # cut -f2 00:06:50.069 02:48:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.069 02:48:00 version -- app/version.sh@20 -- # suffix=-pre 00:06:50.069 02:48:00 version -- app/version.sh@22 -- # version=25.1 00:06:50.069 02:48:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:50.069 02:48:00 version -- app/version.sh@28 -- # version=25.1rc0 00:06:50.069 02:48:00 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:50.069 02:48:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:50.069 02:48:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:50.069 02:48:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:50.069 00:06:50.069 real 0m0.208s 00:06:50.069 user 0m0.127s 00:06:50.069 sys 0m0.106s 00:06:50.069 02:48:00 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.069 
02:48:00 version -- common/autotest_common.sh@10 -- # set +x 00:06:50.069 ************************************ 00:06:50.069 END TEST version 00:06:50.069 ************************************ 00:06:50.069 02:48:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:50.069 02:48:00 -- spdk/autotest.sh@194 -- # uname -s 00:06:50.069 02:48:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:50.069 02:48:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:50.069 02:48:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:50.069 02:48:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:50.069 02:48:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.069 02:48:00 -- common/autotest_common.sh@10 -- # set +x 00:06:50.069 02:48:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:50.069 02:48:00 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:50.069 02:48:00 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.069 02:48:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.069 02:48:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.069 02:48:00 -- common/autotest_common.sh@10 -- # set +x 00:06:50.069 ************************************ 00:06:50.069 START TEST nvmf_tcp 00:06:50.069 ************************************ 00:06:50.069 02:48:00 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.329 * Looking for test storage... 
00:06:50.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.329 02:48:00 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.329 --rc genhtml_branch_coverage=1 00:06:50.329 --rc genhtml_function_coverage=1 00:06:50.329 --rc genhtml_legend=1 00:06:50.329 --rc geninfo_all_blocks=1 00:06:50.329 --rc geninfo_unexecuted_blocks=1 00:06:50.329 00:06:50.329 ' 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.329 --rc genhtml_branch_coverage=1 00:06:50.329 --rc genhtml_function_coverage=1 00:06:50.329 --rc genhtml_legend=1 00:06:50.329 --rc geninfo_all_blocks=1 00:06:50.329 --rc geninfo_unexecuted_blocks=1 00:06:50.329 00:06:50.329 ' 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:50.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.329 --rc genhtml_branch_coverage=1 00:06:50.329 --rc genhtml_function_coverage=1 00:06:50.329 --rc genhtml_legend=1 00:06:50.329 --rc geninfo_all_blocks=1 00:06:50.329 --rc geninfo_unexecuted_blocks=1 00:06:50.329 00:06:50.329 ' 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.329 --rc genhtml_branch_coverage=1 00:06:50.329 --rc genhtml_function_coverage=1 00:06:50.329 --rc genhtml_legend=1 00:06:50.329 --rc geninfo_all_blocks=1 00:06:50.329 --rc geninfo_unexecuted_blocks=1 00:06:50.329 00:06:50.329 ' 00:06:50.329 02:48:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:50.329 02:48:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:50.329 02:48:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.329 02:48:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.329 ************************************ 00:06:50.329 START TEST nvmf_target_core 00:06:50.329 ************************************ 00:06:50.329 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:50.329 * Looking for test storage... 00:06:50.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:50.329 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.329 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.329 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.590 --rc genhtml_branch_coverage=1 00:06:50.590 --rc genhtml_function_coverage=1 00:06:50.590 --rc genhtml_legend=1 00:06:50.590 --rc geninfo_all_blocks=1 00:06:50.590 --rc geninfo_unexecuted_blocks=1 00:06:50.590 00:06:50.590 ' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.590 --rc genhtml_branch_coverage=1 00:06:50.590 --rc genhtml_function_coverage=1 00:06:50.590 --rc genhtml_legend=1 00:06:50.590 --rc geninfo_all_blocks=1 00:06:50.590 --rc geninfo_unexecuted_blocks=1 00:06:50.590 00:06:50.590 ' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.590 --rc genhtml_branch_coverage=1 00:06:50.590 --rc genhtml_function_coverage=1 00:06:50.590 --rc genhtml_legend=1 00:06:50.590 --rc geninfo_all_blocks=1 00:06:50.590 --rc geninfo_unexecuted_blocks=1 00:06:50.590 00:06:50.590 ' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.590 --rc genhtml_branch_coverage=1 00:06:50.590 --rc genhtml_function_coverage=1 00:06:50.590 --rc genhtml_legend=1 00:06:50.590 --rc geninfo_all_blocks=1 00:06:50.590 --rc geninfo_unexecuted_blocks=1 00:06:50.590 00:06:50.590 ' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.590 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.591 02:48:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:50.591 
************************************ 00:06:50.591 START TEST nvmf_abort 00:06:50.591 ************************************ 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:50.591 * Looking for test storage... 00:06:50.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.591 --rc genhtml_branch_coverage=1 00:06:50.591 --rc genhtml_function_coverage=1 00:06:50.591 --rc genhtml_legend=1 00:06:50.591 --rc geninfo_all_blocks=1 00:06:50.591 --rc geninfo_unexecuted_blocks=1 00:06:50.591 00:06:50.591 ' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.591 --rc genhtml_branch_coverage=1 00:06:50.591 --rc genhtml_function_coverage=1 00:06:50.591 --rc genhtml_legend=1 00:06:50.591 --rc geninfo_all_blocks=1 00:06:50.591 --rc geninfo_unexecuted_blocks=1 00:06:50.591 00:06:50.591 ' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.591 --rc genhtml_branch_coverage=1 00:06:50.591 --rc genhtml_function_coverage=1 00:06:50.591 --rc genhtml_legend=1 00:06:50.591 --rc geninfo_all_blocks=1 00:06:50.591 --rc geninfo_unexecuted_blocks=1 00:06:50.591 00:06:50.591 ' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.591 --rc genhtml_branch_coverage=1 00:06:50.591 --rc genhtml_function_coverage=1 00:06:50.591 --rc genhtml_legend=1 00:06:50.591 --rc geninfo_all_blocks=1 00:06:50.591 --rc geninfo_unexecuted_blocks=1 00:06:50.591 00:06:50.591 ' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.591 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
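common.sh also pre-assembles the pieces of a kernel-initiator connect command (NVME_CONNECT, NVME_HOST, NVME_SUBNQN, NVMF_PORT above). This particular test drives I/O through the SPDK abort example rather than the kernel initiator, but purely for orientation those variables would compose into roughly the following, with 10.0.0.2:4420 being the listener that nvmftestinit sets up later in this run:

    # illustrative only; abort.sh does not issue this command
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55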
00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:50.592 02:48:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.130 02:48:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:53.130 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:53.130 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:53.130 02:48:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:53.130 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:53.130 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.130 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.130 02:48:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:53.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:06:53.131 00:06:53.131 --- 10.0.0.2 ping statistics --- 00:06:53.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.131 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:53.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:06:53.131 00:06:53.131 --- 10.0.0.1 ping statistics --- 00:06:53.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.131 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=112814 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 112814 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 112814 ']' 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.131 [2024-11-19 02:48:03.506566] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
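The nvmf_tcp_init steps traced above move one port of the dual-port NIC into a private network namespace so that target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) can talk over real hardware on a single host (NET_TYPE=phy). Condensed into a standalone sketch, using the interface names from this run (they are host-specific):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                    # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator address stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> initiator

The real run additionally tags the iptables rule with an SPDK_NVMF comment so that teardown can later strip exactly that rule.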
00:06:53.131 [2024-11-19 02:48:03.506662] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.131 [2024-11-19 02:48:03.580599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.131 [2024-11-19 02:48:03.626991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.131 [2024-11-19 02:48:03.627059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.131 [2024-11-19 02:48:03.627089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.131 [2024-11-19 02:48:03.627100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.131 [2024-11-19 02:48:03.627110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.131 [2024-11-19 02:48:03.628476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.131 [2024-11-19 02:48:03.628592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.131 [2024-11-19 02:48:03.628588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.131 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.390 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.390 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:53.390 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.390 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.391 [2024-11-19 02:48:03.773491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.391 Malloc0 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.391 Delay0 
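The rpc_cmd calls above are thin wrappers around scripts/rpc.py talking to the target over /var/tmp/spdk.sock, the Unix socket named in the waitforlisten message; the trace issues them without ip netns exec since a filesystem-backed RPC socket stays reachable from the root namespace. Replayed by hand they look roughly like:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256        # flags exactly as passed by abort.sh
    $RPC bdev_malloc_create 64 4096 -b Malloc0                 # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE set above
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000           # artificial-latency bdev stacked on Malloc0

Delay0, not Malloc0, is what gets exported, presumably so the abort tool always finds I/O still in flight to cancel; the next rpc_cmd calls in the trace create nqn.2016-06.io.spdk:cnode0, attach Delay0 as its namespace, and add the 10.0.0.2:4420 listeners before the abort example is launched.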
00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.391 [2024-11-19 02:48:03.836934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.391 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:53.391 [2024-11-19 02:48:03.952536] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:55.927 Initializing NVMe Controllers 00:06:55.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:55.927 controller IO queue size 128 less than required 00:06:55.927 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:55.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:55.927 Initialization complete. Launching workers. 
00:06:55.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29445 00:06:55.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29510, failed to submit 62 00:06:55.927 success 29449, unsuccessful 61, failed 0 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:55.927 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:55.928 rmmod nvme_tcp 00:06:55.928 rmmod nvme_fabrics 00:06:55.928 rmmod nvme_keyring 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 112814 ']' 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 112814 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 112814 ']' 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 112814 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112814 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112814' 00:06:55.928 killing process with pid 112814 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 112814 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 112814 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.928 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.839 00:06:57.839 real 0m7.375s 00:06:57.839 user 0m10.746s 00:06:57.839 sys 0m2.414s 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:57.839 ************************************ 00:06:57.839 END TEST nvmf_abort 00:06:57.839 ************************************ 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.839 ************************************ 00:06:57.839 START TEST nvmf_ns_hotplug_stress 00:06:57.839 ************************************ 00:06:57.839 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:58.099 * Looking for test storage... 
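For reference, the nvmftestfini teardown that closes out nvmf_abort above boils down to roughly the following; _remove_spdk_ns is a helper whose body is not traced here, so the final namespace deletion is an assumption about what it does:

    modprobe -v -r nvme-tcp                               # drops nvme_tcp (the rmmod lines show nvme_fabrics and nvme_keyring going with it)
    modprobe -v -r nvme-fabrics
    kill 112814                                           # killprocess: stop the nvmf_tgt started for this test; the harness then waits on the pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: remove only the rules tagged SPDK_NVMF
    ip -4 addr flush cvl_0_1                              # release the initiator address
    ip netns delete cvl_0_0_ns_spdk                       # assumed effect of _remove_spdk_ns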
00:06:58.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.099 --rc genhtml_branch_coverage=1 00:06:58.099 --rc genhtml_function_coverage=1 00:06:58.099 --rc genhtml_legend=1 00:06:58.099 --rc geninfo_all_blocks=1 00:06:58.099 --rc geninfo_unexecuted_blocks=1 00:06:58.099 00:06:58.099 ' 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.099 --rc genhtml_branch_coverage=1 00:06:58.099 --rc genhtml_function_coverage=1 00:06:58.099 --rc genhtml_legend=1 00:06:58.099 --rc geninfo_all_blocks=1 00:06:58.099 --rc geninfo_unexecuted_blocks=1 00:06:58.099 00:06:58.099 ' 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.099 --rc genhtml_branch_coverage=1 00:06:58.099 --rc genhtml_function_coverage=1 00:06:58.099 --rc genhtml_legend=1 00:06:58.099 --rc geninfo_all_blocks=1 00:06:58.099 --rc geninfo_unexecuted_blocks=1 00:06:58.099 00:06:58.099 ' 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.099 --rc genhtml_branch_coverage=1 00:06:58.099 --rc genhtml_function_coverage=1 00:06:58.099 --rc genhtml_legend=1 00:06:58.099 --rc geninfo_all_blocks=1 00:06:58.099 --rc geninfo_unexecuted_blocks=1 00:06:58.099 00:06:58.099 ' 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.099 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:58.100 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:00.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.637 
02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.637 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:00.638 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:00.638 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:00.638 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:00.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:07:00.638 00:07:00.638 --- 10.0.0.2 ping statistics --- 00:07:00.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.638 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:07:00.638 00:07:00.638 --- 10.0.0.1 ping statistics --- 00:07:00.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.638 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=115679 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 115679 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
115679 ']' 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.638 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.638 [2024-11-19 02:48:10.877514] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:00.638 [2024-11-19 02:48:10.877614] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.638 [2024-11-19 02:48:10.950120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.638 [2024-11-19 02:48:10.998517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.638 [2024-11-19 02:48:10.998576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.638 [2024-11-19 02:48:10.998604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.638 [2024-11-19 02:48:10.998615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.638 [2024-11-19 02:48:10.998625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
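The trace above is the tail of nvmftestinit: the harness found the two Intel 0x159b ports bound to the ice driver (cvl_0_0 and cvl_0_1), moved cvl_0_0 into its own network namespace, addressed the pair as 10.0.0.2 (target side) and 10.0.0.1 (initiator side), opened TCP port 4420 in iptables, confirmed reachability with the two pings, and then launched the NVMe-oF target inside the namespace. A condensed sketch of the equivalent commands, taken from the xtrace above with the workspace path shortened to $SPDK_DIR (that variable name is an abbreviation, not from the log):

    # target port goes into its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # -m 0xE pins the target to cores 1-3; -e 0xFFFF enables every tracepoint group
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The EAL and tracepoint notices above already confirm DPDK 22.11.4 and the 0xFFFF mask; the reactor notices that follow (cores 1, 2 and 3) match the 0xE core mask.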
00:07:00.638 [2024-11-19 02:48:11.000168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.638 [2024-11-19 02:48:11.000230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.638 [2024-11-19 02:48:11.000227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.638 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.638 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:00.639 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:00.639 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.639 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.639 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.639 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:00.639 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:00.897 [2024-11-19 02:48:11.389601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.897 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:01.156 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.414 [2024-11-19 02:48:11.940166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.414 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.672 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:01.930 Malloc0 00:07:01.930 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:02.188 Delay0 00:07:02.188 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.446 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:02.703 NULL1 00:07:02.703 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:03.269 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=115979 00:07:03.269 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:03.269 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:03.269 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.204 Read completed with error (sct=0, sc=11) 00:07:04.204 02:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.462 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:04.462 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:05.027 true 00:07:05.027 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:05.027 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.594 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.851 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:05.851 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:06.109 true 00:07:06.109 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:06.109 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.367 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.625 
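By this point the test body is set up and the stress loop has begun: ns_hotplug_stress.sh created the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a data and a discovery listener on 10.0.0.2:4420, the Malloc0-backed Delay0 bdev and the NULL1 bdev, attached both as namespaces, and started spdk_nvme_perf (PID 115979: 30 seconds of 512-byte random reads at queue depth 128 against the target). While that perf job stays alive (the repeated `kill -0 115979` probes), each iteration removes namespace 1, re-adds Delay0 and resizes NULL1 one step larger, which is what the repeating null_size=1001, 1002, ... blocks below are. A minimal sketch of the fixture and of one loop iteration, condensed from the rpc.py calls in the trace (rpc.py stands for the full scripts/rpc.py path used in the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # one iteration of the hotplug loop, repeated while the perf initiator is running
    kill -0 "$PERF_PID"                                              # stop once perf has exited
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_resize NULL1 $(( ++null_size ))

The suppressed "Read completed with error (sct=0, sc=11)" messages interleaved below are the initiator's reads erroring out while namespace 1 is detached underneath it, which is the hot-plug behaviour this test exercises.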
02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:06.625 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:06.883 true 00:07:06.883 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:06.883 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.141 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.399 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:07.399 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:07.657 true 00:07:07.657 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:07.657 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.030 02:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.030 02:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:09.030 02:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:09.289 true 00:07:09.289 02:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:09.289 02:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.547 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.806 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:09.806 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:10.066 true 00:07:10.066 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:10.066 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:10.324 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.582 02:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:10.582 02:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:10.840 true 00:07:10.840 02:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:10.840 02:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.215 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.215 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:12.215 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:12.473 true 00:07:12.473 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:12.473 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.732 02:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.990 02:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:12.990 02:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:13.248 true 00:07:13.248 02:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:13.248 02:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.507 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.766 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:13.766 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:14.024 true 00:07:14.024 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:14.024 02:48:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.963 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.221 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:15.221 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:15.479 true 00:07:15.479 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:15.479 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.045 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.045 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:16.045 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:16.304 true 00:07:16.304 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:16.304 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.563 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.821 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:16.821 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:17.387 true 00:07:17.387 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:17.387 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.321 02:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.580 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:18.580 02:48:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:18.836 true 00:07:18.836 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:18.836 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.094 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.352 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:19.352 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:19.610 true 00:07:19.610 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:19.610 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.868 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.127 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:20.127 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:20.386 true 00:07:20.386 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:20.386 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.319 02:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.886 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:21.886 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:21.886 true 00:07:21.886 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:21.886 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.144 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.402 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:22.402 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:22.659 true 00:07:22.917 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:22.917 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.175 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.434 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:23.434 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:23.692 true 00:07:23.692 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:23.692 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.626 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.884 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:24.884 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:25.142 true 00:07:25.142 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:25.142 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.401 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.659 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:25.659 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:25.916 true 00:07:25.916 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 
00:07:25.916 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.174 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.433 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:26.433 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:26.691 true 00:07:26.691 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:26.691 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.626 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.884 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:27.884 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:28.451 true 00:07:28.451 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:28.451 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.451 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.709 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:28.709 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:28.966 true 00:07:28.966 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:28.966 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.532 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.789 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:29.789 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1025 00:07:30.047 true 00:07:30.047 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:30.047 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.983 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.241 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:31.241 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:31.241 true 00:07:31.499 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:31.499 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.757 02:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.015 02:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:32.015 02:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:32.273 true 00:07:32.273 02:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:32.273 02:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.208 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.467 Initializing NVMe Controllers 00:07:33.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.467 Controller IO queue size 128, less than required. 00:07:33.467 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.467 Controller IO queue size 128, less than required. 00:07:33.467 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:33.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:33.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:33.467 Initialization complete. Launching workers. 00:07:33.467 ======================================================== 00:07:33.467 Latency(us) 00:07:33.467 Device Information : IOPS MiB/s Average min max 00:07:33.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 619.76 0.30 85036.08 3387.52 1013247.21 00:07:33.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8558.22 4.18 14956.09 2363.95 535385.68 00:07:33.467 ======================================================== 00:07:33.467 Total : 9177.98 4.48 19688.35 2363.95 1013247.21 00:07:33.467 00:07:33.467 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:33.467 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:33.726 true 00:07:33.726 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115979 00:07:33.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (115979) - No such process 00:07:33.726 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 115979 00:07:33.726 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.984 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.242 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:34.242 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:34.242 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:34.242 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.242 02:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:34.501 null0 00:07:34.501 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.501 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.501 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:34.760 null1 00:07:34.760 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.760 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.760 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:35.018 null2 00:07:35.018 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.018 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.018 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:35.277 null3 00:07:35.277 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.277 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.277 02:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:35.535 null4 00:07:35.535 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.535 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.535 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:35.793 null5 00:07:35.793 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.793 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.793 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:36.051 null6 00:07:36.051 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.051 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.051 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:36.311 null7 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
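Once the perf job exits (the `kill -0` at script line 44 reports "No such process" above), the resize loop ends, the two original namespaces are dropped, and the second phase starts: eight null bdevs, null0 through null7, are created and eight add_remove workers are forked, each paired with its own namespace ID. A sketch of that orchestration as implied by the xtrace (nthreads and pids are variable names from the trace; the loop layout here is a reconstruction, not the verbatim script):

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        rpc.py bdev_null_create "null$i" 100 4096
    done
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &    # nsid 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                          # appears below as: wait 120042 120043 ...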
00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
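
The interleaved sh@14-18 and sh@62-66 lines around this point show the launcher forking one add_remove worker per namespace: worker N repeatedly attaches bdev null(N-1) as namespace N of nqn.2016-06.io.spdk:cnode1 and detaches it again, ten times, while the parent records each worker's PID and then waits on all of them. The following is reconstructed from those trace lines as a sketch of the pattern, not the verbatim script source; variable names are illustrative:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the bdev as namespace <nsid>, then detach it again
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # one concurrent worker per namespace
        pids+=($!)
    done
    wait "${pids[@]}"                      # matches the "wait 120042 120043 ..." line in the trace
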
00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 120042 120043 120045 120047 120049 120051 120053 120055 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.311 02:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.879 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.137 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.395 02:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.654 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.913 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.172 02:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.430 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
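
From here the pattern simply repeats: because the eight workers run concurrently, each pass logs its nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns calls in a different order, and that unsynchronized attach/detach churn against a single subsystem is the hotplug stress being exercised. When reproducing this by hand, the subsystem's currently attached namespaces can be inspected between passes; this check is not part of the traced test and is shown only as a convenience, reusing the rpc.py path from the trace:

    # list all subsystems; the cnode1 entry's "namespaces" array shows what is attached right now
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
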
00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.996 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.254 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.512 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.771 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.031 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.290 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.858 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.116 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.116 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.116 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.116 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.116 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.116 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.374 02:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.632 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.890 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:41.890 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.891 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.149 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.149 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.150 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.150 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.150 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.150 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.150 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.150 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.410 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.411 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.411 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.411 rmmod nvme_tcp 00:07:42.668 rmmod nvme_fabrics 00:07:42.668 rmmod nvme_keyring 00:07:42.668 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.668 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:42.668 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:42.668 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 115679 ']' 00:07:42.668 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 115679 00:07:42.668 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 115679 ']' 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 115679 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115679 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115679' 00:07:42.669 killing process with pid 115679 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 115679 00:07:42.669 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 115679 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]] 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.928 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:44.841 00:07:44.841 real 0m46.941s 00:07:44.841 user 3m39.238s 00:07:44.841 sys 0m15.610s 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:44.841 ************************************ 00:07:44.841 END TEST nvmf_ns_hotplug_stress 00:07:44.841 ************************************ 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.841 ************************************ 00:07:44.841 START TEST nvmf_delete_subsystem 00:07:44.841 ************************************ 00:07:44.841 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:45.101 * Looking for test storage... 
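The nvmf_ns_hotplug_stress trace that finishes above reduces to a small add/remove cycle driven through rpc.py. A minimal sketch of that loop, reconstructed from the @16/@17/@18 lines in the trace (the 10-iteration bound, the nsid range 1..8 and the null0..null7 bdev names come from the log; the backgrounding with & and wait is an assumption based on the scrambled nsid order, not the verbatim test script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 10; i++)); do
        # attach null bdevs null0..null7 as namespaces 1..8 of cnode1
        # (the out-of-order nsids in the trace suggest these calls run in the background)
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))" &
        done
        wait
        # detach them again while host-side I/O keeps running
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n" &
        done
        wait
    done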
00:07:45.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.102 --rc genhtml_branch_coverage=1 00:07:45.102 --rc genhtml_function_coverage=1 00:07:45.102 --rc genhtml_legend=1 00:07:45.102 --rc geninfo_all_blocks=1 00:07:45.102 --rc geninfo_unexecuted_blocks=1 00:07:45.102 00:07:45.102 ' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.102 --rc genhtml_branch_coverage=1 00:07:45.102 --rc genhtml_function_coverage=1 00:07:45.102 --rc genhtml_legend=1 00:07:45.102 --rc geninfo_all_blocks=1 00:07:45.102 --rc geninfo_unexecuted_blocks=1 00:07:45.102 00:07:45.102 ' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.102 --rc genhtml_branch_coverage=1 00:07:45.102 --rc genhtml_function_coverage=1 00:07:45.102 --rc genhtml_legend=1 00:07:45.102 --rc geninfo_all_blocks=1 00:07:45.102 --rc geninfo_unexecuted_blocks=1 00:07:45.102 00:07:45.102 ' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.102 --rc genhtml_branch_coverage=1 00:07:45.102 --rc genhtml_function_coverage=1 00:07:45.102 --rc genhtml_legend=1 00:07:45.102 --rc geninfo_all_blocks=1 00:07:45.102 --rc geninfo_unexecuted_blocks=1 00:07:45.102 00:07:45.102 ' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.102 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.103 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:47.645 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.645 
02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:47.645 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:47.645 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.645 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:47.646 Found net devices under 0000:0a:00.1: cvl_0_1 
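The device-discovery lines above resolve each detected Intel E810 port (0000:0a:00.0 and 0000:0a:00.1) to its kernel net device through sysfs, yielding cvl_0_0 and cvl_0_1. A minimal sketch of that lookup, assuming the same sysfs layout as in the trace:

    # map each test NIC from PCI address to net device name, as nvmf/common.sh does above
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue      # no net driver bound to this port
            echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done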
00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:07:47.646 00:07:47.646 --- 10.0.0.2 ping statistics --- 00:07:47.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.646 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:07:47.646 00:07:47.646 --- 10.0.0.1 ping statistics --- 00:07:47.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.646 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=122944 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 122944 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 122944 ']' 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.646 02:48:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.646 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.646 [2024-11-19 02:48:58.017686] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:47.646 [2024-11-19 02:48:58.017811] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.646 [2024-11-19 02:48:58.089779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.646 [2024-11-19 02:48:58.135811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.646 [2024-11-19 02:48:58.135864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.646 [2024-11-19 02:48:58.135892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.646 [2024-11-19 02:48:58.135904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.646 [2024-11-19 02:48:58.135913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.646 [2024-11-19 02:48:58.137294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.646 [2024-11-19 02:48:58.137299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.646 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.646 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:47.646 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.646 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.646 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.905 [2024-11-19 02:48:58.284541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:47.905 02:48:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.905 [2024-11-19 02:48:58.300778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.905 NULL1 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.905 Delay0 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122976 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:47.905 02:48:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:47.905 [2024-11-19 02:48:58.385567] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
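Before the perf run starts, the trace configures the target over the RPC socket: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, and that delay bdev attached as a namespace. A condensed sketch of the sequence (flags copied from the trace; the `ip netns exec cvl_0_0_ns_spdk` prefix used for the target-side commands is omitted here, and the sleep mirrors delete_subsystem.sh@30):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512
    # ~1 s of artificial latency on every I/O so requests are still queued when the subsystem goes away
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # host-side load that the subsystem deletion will race against
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2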
00:07:49.806 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.806 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.806 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 starting I/O failed: -6 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 [2024-11-19 02:49:00.506714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8c810 is same with the state(6) to be set 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 
Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Write completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.065 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: 
-6 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 starting I/O failed: -6 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 [2024-11-19 02:49:00.507770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0904000c40 is same with the state(6) to be set 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed 
with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:50.066 Write completed with error (sct=0, sc=8) 00:07:50.066 Read completed with error (sct=0, sc=8) 00:07:51.001 [2024-11-19 02:49:01.479651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(6) to be set 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.001 Write completed with error (sct=0, sc=8) 00:07:51.001 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 [2024-11-19 02:49:01.508851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f090400d020 is same with the state(6) to be set 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error 
(sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 [2024-11-19 02:49:01.509067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f090400d7e0 is same with the state(6) to be set 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 [2024-11-19 02:49:01.510569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cb40 is same with the state(6) to be set 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 
00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Write completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 Read completed with error (sct=0, sc=8) 00:07:51.002 [2024-11-19 02:49:01.511191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8c3f0 is same with the state(6) to be set 00:07:51.002 Initializing NVMe Controllers 00:07:51.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:51.002 Controller IO queue size 128, less than required. 00:07:51.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:51.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:51.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:51.002 Initialization complete. Launching workers. 
00:07:51.002 ======================================================== 00:07:51.002 Latency(us) 00:07:51.002 Device Information : IOPS MiB/s Average min max 00:07:51.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.82 0.08 932357.00 497.64 2003834.18 00:07:51.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.86 0.08 951019.06 377.18 2001576.78 00:07:51.002 ======================================================== 00:07:51.002 Total : 333.68 0.16 941521.41 377.18 2003834.18 00:07:51.002 00:07:51.002 [2024-11-19 02:49:01.511676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9a5b0 (9): Bad file descriptor 00:07:51.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:51.002 02:49:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.002 02:49:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:51.002 02:49:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122976 00:07:51.002 02:49:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122976 00:07:51.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122976) - No such process 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122976 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 122976 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 122976 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.569 02:49:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.569 [2024-11-19 02:49:02.034884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=123401 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:51.569 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:51.569 [2024-11-19 02:49:02.107235] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
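The trace above launches spdk_nvme_perf in the background (perf_pid=123401) and then watches it with kill -0 inside a bounded delay loop, as seen in the delete_subsystem.sh lines quoted in the trace (delay=0, kill -0, sleep 0.5, delay++ > 20). A minimal sketch of that wait-with-timeout pattern, using the limits visible in the trace; the function name wait_for_pid_exit is assumed purely for illustration:

wait_for_pid_exit() {
    # Give the background process up to ~10s (20 iterations x 0.5s) to exit on its own.
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # still running after the budget: give up
        sleep 0.5
    done
    return 0                             # kill -0 failed, so the pid is gone
}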
00:07:52.135 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:52.135 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:52.135 02:49:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:52.701 02:49:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:52.701 02:49:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:52.701 02:49:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:52.959 02:49:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:52.959 02:49:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:52.959 02:49:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:53.525 02:49:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:53.525 02:49:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:53.525 02:49:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.091 02:49:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.091 02:49:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:54.091 02:49:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.656 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.656 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:54.656 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.914 Initializing NVMe Controllers 00:07:54.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:54.914 Controller IO queue size 128, less than required. 00:07:54.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:54.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:54.914 Initialization complete. Launching workers. 
00:07:54.914 ======================================================== 00:07:54.914 Latency(us) 00:07:54.914 Device Information : IOPS MiB/s Average min max 00:07:54.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004301.47 1000207.13 1042029.06 00:07:54.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004823.74 1000148.00 1014287.98 00:07:54.914 ======================================================== 00:07:54.914 Total : 256.00 0.12 1004562.61 1000148.00 1042029.06 00:07:54.914 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123401 00:07:55.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (123401) - No such process 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 123401 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.173 rmmod nvme_tcp 00:07:55.173 rmmod nvme_fabrics 00:07:55.173 rmmod nvme_keyring 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 122944 ']' 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 122944 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 122944 ']' 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 122944 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122944 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122944' 00:07:55.173 killing process with pid 122944 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 122944 00:07:55.173 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 122944 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.432 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.347 00:07:57.347 real 0m12.485s 00:07:57.347 user 0m27.892s 00:07:57.347 sys 0m3.163s 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.347 ************************************ 00:07:57.347 END TEST nvmf_delete_subsystem 00:07:57.347 ************************************ 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.347 ************************************ 00:07:57.347 START TEST nvmf_host_management 00:07:57.347 ************************************ 00:07:57.347 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:57.607 * Looking for test storage... 
00:07:57.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.607 --rc genhtml_branch_coverage=1 00:07:57.607 --rc genhtml_function_coverage=1 00:07:57.607 --rc genhtml_legend=1 00:07:57.607 --rc geninfo_all_blocks=1 00:07:57.607 --rc geninfo_unexecuted_blocks=1 00:07:57.607 00:07:57.607 ' 00:07:57.607 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.607 --rc genhtml_branch_coverage=1 00:07:57.607 --rc genhtml_function_coverage=1 00:07:57.607 --rc genhtml_legend=1 00:07:57.607 --rc geninfo_all_blocks=1 00:07:57.607 --rc geninfo_unexecuted_blocks=1 00:07:57.608 00:07:57.608 ' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.608 --rc genhtml_branch_coverage=1 00:07:57.608 --rc genhtml_function_coverage=1 00:07:57.608 --rc genhtml_legend=1 00:07:57.608 --rc geninfo_all_blocks=1 00:07:57.608 --rc geninfo_unexecuted_blocks=1 00:07:57.608 00:07:57.608 ' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.608 --rc genhtml_branch_coverage=1 00:07:57.608 --rc genhtml_function_coverage=1 00:07:57.608 --rc genhtml_legend=1 00:07:57.608 --rc geninfo_all_blocks=1 00:07:57.608 --rc geninfo_unexecuted_blocks=1 00:07:57.608 00:07:57.608 ' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:57.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.608 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:00.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:00.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:00.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.145 02:49:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:00.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:08:00.145 00:08:00.145 --- 10.0.0.2 ping statistics --- 00:08:00.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.145 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:00.145 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:08:00.145 00:08:00.145 --- 10.0.0.1 ping statistics --- 00:08:00.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.145 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=125875 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 125875 00:08:00.146 02:49:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125875 ']' 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.146 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.146 [2024-11-19 02:49:10.481541] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:00.146 [2024-11-19 02:49:10.481642] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.146 [2024-11-19 02:49:10.560613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.146 [2024-11-19 02:49:10.611089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.146 [2024-11-19 02:49:10.611150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.146 [2024-11-19 02:49:10.611179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.146 [2024-11-19 02:49:10.611190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.146 [2024-11-19 02:49:10.611200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
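The target above was started with core mask 0x1E, and the reactor notices that follow confirm cores 1 through 4 came up. Purely as an illustration (this snippet is not part of the test scripts), the mask can be decoded in the same shell dialect:

mask=0x1E
for core in $(seq 0 31); do
    # print each core whose bit is set in the mask; 0x1E is binary 11110
    (( (mask >> core) & 1 )) && echo "core $core selected"
done
# expected output: core 1 ... core 4, matching the four reactor notices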
00:08:00.146 [2024-11-19 02:49:10.612859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.146 [2024-11-19 02:49:10.612928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.146 [2024-11-19 02:49:10.612977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:00.146 [2024-11-19 02:49:10.612980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.404 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.404 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:00.404 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.404 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.404 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 [2024-11-19 02:49:10.798530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 Malloc0 00:08:00.405 [2024-11-19 02:49:10.873630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=125924 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 125924 /var/tmp/bdevperf.sock 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125924 ']' 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:00.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.405 { 00:08:00.405 "params": { 00:08:00.405 "name": "Nvme$subsystem", 00:08:00.405 "trtype": "$TEST_TRANSPORT", 00:08:00.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.405 "adrfam": "ipv4", 00:08:00.405 "trsvcid": "$NVMF_PORT", 00:08:00.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.405 "hdgst": ${hdgst:-false}, 00:08:00.405 "ddgst": ${ddgst:-false} 00:08:00.405 }, 00:08:00.405 "method": "bdev_nvme_attach_controller" 00:08:00.405 } 00:08:00.405 EOF 00:08:00.405 )") 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:00.405 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.405 "params": { 00:08:00.405 "name": "Nvme0", 00:08:00.405 "trtype": "tcp", 00:08:00.405 "traddr": "10.0.0.2", 00:08:00.405 "adrfam": "ipv4", 00:08:00.405 "trsvcid": "4420", 00:08:00.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:00.405 "hdgst": false, 00:08:00.405 "ddgst": false 00:08:00.405 }, 00:08:00.405 "method": "bdev_nvme_attach_controller" 00:08:00.405 }' 00:08:00.405 [2024-11-19 02:49:10.955619] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
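The bdevperf invocation above takes its bdev configuration over an anonymous fd (--json /dev/fd/63): gen_nvmf_target_json expands the parameter template printed in the trace into a bdev_nvme_attach_controller entry and feeds it in via process substitution. A minimal standalone sketch of the same launch, with the printed parameters wrapped in the standard SPDK "subsystems" envelope (the envelope is assumed here; the trace only shows the per-controller entry):

# Launch bdevperf against the target, attaching Nvme0 over NVMe/TCP (same flags as the traced run).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)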
00:08:00.405 [2024-11-19 02:49:10.955726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125924 ] 00:08:00.664 [2024-11-19 02:49:11.024888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.664 [2024-11-19 02:49:11.071898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.922 Running I/O for 10 seconds... 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:00.922 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:01.183 
02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.183 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:01.183 [2024-11-19 02:49:11.688622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:01.183 [2024-11-19 02:49:11.688867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.688981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.688997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.689026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.689060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.689089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.689118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.689147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 
[2024-11-19 02:49:11.689176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.689205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.183 [2024-11-19 02:49:11.689219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.183 [2024-11-19 02:49:11.689233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 
02:49:11.689468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 
02:49:11.689780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.689966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.689981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.184 [2024-11-19 02:49:11.690402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.184 [2024-11-19 02:49:11.690416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.690434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.185 [2024-11-19 02:49:11.690449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.690464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.185 [2024-11-19 02:49:11.690478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.690493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.185 [2024-11-19 02:49:11.690507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.690522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.185 [2024-11-19 02:49:11.690536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.690551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.185 [2024-11-19 02:49:11.690565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.690580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.185 [2024-11-19 02:49:11.690594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.690609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.185 [2024-11-19 02:49:11.690623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:01.185 [2024-11-19 02:49:11.691876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:01.185 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.185 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:08:01.185 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.185 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:01.185 task offset: 84736 on job bdev=Nvme0n1 fails 00:08:01.185 00:08:01.185 Latency(us) 00:08:01.185 [2024-11-19T01:49:11.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.185 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:01.185 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:01.185 Verification LBA range: start 0x0 length 0x400 00:08:01.185 Nvme0n1 : 0.40 1597.70 99.86 159.77 0.00 35353.30 2645.71 34564.17 00:08:01.185 [2024-11-19T01:49:11.800Z] =================================================================================================================== 00:08:01.185 [2024-11-19T01:49:11.800Z] Total : 1597.70 99.86 159.77 0.00 35353.30 2645.71 34564.17 00:08:01.185 [2024-11-19 02:49:11.693836] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.185 [2024-11-19 02:49:11.693866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd52d70 (9): Bad file descriptor 00:08:01.185 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.185 02:49:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:01.444 [2024-11-19 02:49:11.825868] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 125924 00:08:02.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (125924) - No such process 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.379 { 00:08:02.379 "params": { 00:08:02.379 "name": "Nvme$subsystem", 00:08:02.379 "trtype": "$TEST_TRANSPORT", 00:08:02.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.379 "adrfam": "ipv4", 00:08:02.379 "trsvcid": "$NVMF_PORT", 00:08:02.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.379 "hdgst": ${hdgst:-false}, 00:08:02.379 "ddgst": ${ddgst:-false} 00:08:02.379 
}, 00:08:02.379 "method": "bdev_nvme_attach_controller" 00:08:02.379 } 00:08:02.379 EOF 00:08:02.379 )") 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:02.379 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.379 "params": { 00:08:02.379 "name": "Nvme0", 00:08:02.379 "trtype": "tcp", 00:08:02.379 "traddr": "10.0.0.2", 00:08:02.379 "adrfam": "ipv4", 00:08:02.379 "trsvcid": "4420", 00:08:02.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:02.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:02.379 "hdgst": false, 00:08:02.379 "ddgst": false 00:08:02.379 }, 00:08:02.379 "method": "bdev_nvme_attach_controller" 00:08:02.379 }' 00:08:02.379 [2024-11-19 02:49:12.752995] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:02.379 [2024-11-19 02:49:12.753076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126200 ] 00:08:02.379 [2024-11-19 02:49:12.821770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.379 [2024-11-19 02:49:12.870076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.638 Running I/O for 1 seconds... 00:08:03.574 1664.00 IOPS, 104.00 MiB/s 00:08:03.574 Latency(us) 00:08:03.574 [2024-11-19T01:49:14.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.574 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:03.574 Verification LBA range: start 0x0 length 0x400 00:08:03.574 Nvme0n1 : 1.03 1681.09 105.07 0.00 0.00 37457.30 7136.14 33010.73 00:08:03.574 [2024-11-19T01:49:14.189Z] =================================================================================================================== 00:08:03.574 [2024-11-19T01:49:14.189Z] Total : 1681.09 105.07 0.00 0.00 37457.30 7136.14 33010.73 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.833 rmmod nvme_tcp 00:08:03.833 rmmod nvme_fabrics 00:08:03.833 rmmod nvme_keyring 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 125875 ']' 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 125875 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 125875 ']' 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 125875 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125875 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125875' 00:08:03.833 killing process with pid 125875 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 125875 00:08:03.833 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 125875 00:08:04.094 [2024-11-19 02:49:14.605119] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:08:04.094 02:49:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:06.642 00:08:06.642 real 0m8.724s 00:08:06.642 user 0m19.210s 00:08:06.642 sys 0m2.794s 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.642 ************************************ 00:08:06.642 END TEST nvmf_host_management 00:08:06.642 ************************************ 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.642 ************************************ 00:08:06.642 START TEST nvmf_lvol 00:08:06.642 ************************************ 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.642 * Looking for test storage... 00:08:06.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:06.642 02:49:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.642 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.643 --rc genhtml_branch_coverage=1 00:08:06.643 --rc genhtml_function_coverage=1 00:08:06.643 --rc genhtml_legend=1 00:08:06.643 --rc geninfo_all_blocks=1 00:08:06.643 --rc geninfo_unexecuted_blocks=1 00:08:06.643 00:08:06.643 ' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.643 --rc genhtml_branch_coverage=1 00:08:06.643 --rc genhtml_function_coverage=1 00:08:06.643 --rc genhtml_legend=1 00:08:06.643 --rc geninfo_all_blocks=1 00:08:06.643 --rc geninfo_unexecuted_blocks=1 00:08:06.643 00:08:06.643 ' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.643 --rc genhtml_branch_coverage=1 00:08:06.643 --rc genhtml_function_coverage=1 00:08:06.643 --rc genhtml_legend=1 00:08:06.643 --rc geninfo_all_blocks=1 00:08:06.643 --rc geninfo_unexecuted_blocks=1 00:08:06.643 00:08:06.643 ' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.643 --rc genhtml_branch_coverage=1 00:08:06.643 --rc genhtml_function_coverage=1 00:08:06.643 --rc genhtml_legend=1 00:08:06.643 --rc geninfo_all_blocks=1 00:08:06.643 --rc geninfo_unexecuted_blocks=1 00:08:06.643 00:08:06.643 ' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.643 02:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.550 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:08.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:08.551 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.551 02:49:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:08.551 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:08.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.551 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:08:08.810 00:08:08.810 --- 10.0.0.2 ping statistics --- 00:08:08.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.810 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:08:08.810 00:08:08.810 --- 10.0.0.1 ping statistics --- 00:08:08.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.810 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=128337 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 128337 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 128337 ']' 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.810 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.810 [2024-11-19 02:49:19.338591] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
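In plain shell, the nvmf_tcp_init sequence traced above amounts to roughly the following; the interface names (cvl_0_0 / cvl_0_1), the 10.0.0.0/24 addressing and port 4420 are taken from this run, so treat it as a sketch of the setup rather than a portable recipe:

  #!/usr/bin/env bash
  # Sketch of the per-test network prep shown in the trace above.
  TGT_IF=cvl_0_0            # NIC handed to the SPDK target (moved into a netns)
  INI_IF=cvl_0_1            # NIC left in the default netns for the initiator
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                         # isolate the target-side port
  ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF" # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port on the initiator side and verify reachability both ways
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
  # the target app is then launched inside the namespace, e.g.:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7

The iptables rule is tagged with an SPDK_NVMF comment so that teardown can later restore the firewall by filtering exactly those rules back out.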
00:08:08.810 [2024-11-19 02:49:19.338666] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.810 [2024-11-19 02:49:19.409730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.069 [2024-11-19 02:49:19.455742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.069 [2024-11-19 02:49:19.455794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.069 [2024-11-19 02:49:19.455816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.069 [2024-11-19 02:49:19.455827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.069 [2024-11-19 02:49:19.455836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.069 [2024-11-19 02:49:19.457216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.069 [2024-11-19 02:49:19.457274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.069 [2024-11-19 02:49:19.457277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.069 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.069 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:09.069 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.069 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.069 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.069 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.069 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:09.328 [2024-11-19 02:49:19.838323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.328 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.586 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:09.586 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.846 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:09.846 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:10.413 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:10.413 02:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=59851ac0-8bc8-48c7-86e8-37645914e89d 00:08:10.413 02:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 59851ac0-8bc8-48c7-86e8-37645914e89d lvol 20 00:08:10.671 02:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b84347af-5ab2-4f10-bbc3-97acf3db6587 00:08:10.671 02:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.237 02:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b84347af-5ab2-4f10-bbc3-97acf3db6587 00:08:11.237 02:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:11.495 [2024-11-19 02:49:22.067366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.495 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.753 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=128722 00:08:11.753 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:11.753 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:13.128 02:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b84347af-5ab2-4f10-bbc3-97acf3db6587 MY_SNAPSHOT 00:08:13.128 02:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a7782569-e1b5-42d6-adba-d60b3404d3c8 00:08:13.128 02:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b84347af-5ab2-4f10-bbc3-97acf3db6587 30 00:08:13.695 02:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a7782569-e1b5-42d6-adba-d60b3404d3c8 MY_CLONE 00:08:13.954 02:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=eac04746-a9c2-4048-aee9-b57c40500a6f 00:08:13.954 02:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate eac04746-a9c2-4048-aee9-b57c40500a6f 00:08:14.521 02:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 128722 00:08:22.640 Initializing NVMe Controllers 00:08:22.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:22.640 Controller IO queue size 128, less than required. 00:08:22.640 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
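The perf numbers follow below; the provisioning and mutation steps traced above (two malloc bdevs striped into raid0, an lvstore and lvol on top, exported over NVMe/TCP, then snapshotted, resized, cloned and inflated while spdk_nvme_perf writes to the namespace) reduce to roughly this rpc.py sequence. The rpc path and the shell variable captures are assumptions added for readability; 20 and 30 are LVOL_BDEV_INIT_SIZE and LVOL_BDEV_FINAL_SIZE from the top of the script:

  rpc=scripts/rpc.py                                   # assumed relative to the SPDK tree
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                       # -> Malloc0
  $rpc bdev_malloc_create 64 512                       # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 = LVOL_BDEV_INIT_SIZE
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf (randwrite, qd 128, 10 s) runs against the exported namespace,
  # the volume underneath it is mutated:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                     # 30 = LVOL_BDEV_FINAL_SIZE
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                      # make the clone independent of its snapshot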
00:08:22.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:22.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:22.640 Initialization complete. Launching workers. 00:08:22.640 ======================================================== 00:08:22.640 Latency(us) 00:08:22.640 Device Information : IOPS MiB/s Average min max 00:08:22.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10545.30 41.19 12146.25 317.81 68599.77 00:08:22.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10431.90 40.75 12273.33 2389.07 77499.36 00:08:22.640 ======================================================== 00:08:22.640 Total : 20977.20 81.94 12209.45 317.81 77499.36 00:08:22.640 00:08:22.640 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.640 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b84347af-5ab2-4f10-bbc3-97acf3db6587 00:08:22.899 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59851ac0-8bc8-48c7-86e8-37645914e89d 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.157 rmmod nvme_tcp 00:08:23.157 rmmod nvme_fabrics 00:08:23.157 rmmod nvme_keyring 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 128337 ']' 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 128337 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 128337 ']' 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 128337 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128337 00:08:23.157 02:49:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128337' 00:08:23.157 killing process with pid 128337 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 128337 00:08:23.157 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 128337 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.418 02:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.331 02:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.591 00:08:25.591 real 0m19.214s 00:08:25.591 user 1m5.398s 00:08:25.591 sys 0m5.599s 00:08:25.591 02:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.591 02:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.591 ************************************ 00:08:25.591 END TEST nvmf_lvol 00:08:25.591 ************************************ 00:08:25.591 02:49:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:25.591 02:49:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.591 02:49:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.591 02:49:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.591 ************************************ 00:08:25.591 START TEST nvmf_lvs_grow 00:08:25.591 ************************************ 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:25.591 * Looking for test storage... 
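The nvmf_lvol teardown traced above (nvmftestfini) boils down to roughly the following; the body of _remove_spdk_ns is not shown in this log, so the namespace removal line is an assumption:

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
  kill "$nvmfpid"                                    # stop the nvmf_tgt started earlier
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop only the firewall rules tagged with the SPDK_NVMF comment, keep everything else
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null        # assumed: _remove_spdk_ns details are not in this trace
  ip -4 addr flush cvl_0_1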
00:08:25.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.591 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.591 --rc genhtml_branch_coverage=1 00:08:25.592 --rc genhtml_function_coverage=1 00:08:25.592 --rc genhtml_legend=1 00:08:25.592 --rc geninfo_all_blocks=1 00:08:25.592 --rc geninfo_unexecuted_blocks=1 00:08:25.592 00:08:25.592 ' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.592 --rc genhtml_branch_coverage=1 00:08:25.592 --rc genhtml_function_coverage=1 00:08:25.592 --rc genhtml_legend=1 00:08:25.592 --rc geninfo_all_blocks=1 00:08:25.592 --rc geninfo_unexecuted_blocks=1 00:08:25.592 00:08:25.592 ' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.592 --rc genhtml_branch_coverage=1 00:08:25.592 --rc genhtml_function_coverage=1 00:08:25.592 --rc genhtml_legend=1 00:08:25.592 --rc geninfo_all_blocks=1 00:08:25.592 --rc geninfo_unexecuted_blocks=1 00:08:25.592 00:08:25.592 ' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.592 --rc genhtml_branch_coverage=1 00:08:25.592 --rc genhtml_function_coverage=1 00:08:25.592 --rc genhtml_legend=1 00:08:25.592 --rc geninfo_all_blocks=1 00:08:25.592 --rc geninfo_unexecuted_blocks=1 00:08:25.592 00:08:25.592 ' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:25.592 02:49:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.592 02:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:28.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:28.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.136 02:49:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.136 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:28.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:28.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:08:28.137 00:08:28.137 --- 10.0.0.2 ping statistics --- 00:08:28.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.137 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:08:28.137 00:08:28.137 --- 10.0.0.1 ping statistics --- 00:08:28.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.137 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=132070 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 132070 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 132070 ']' 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.137 [2024-11-19 02:49:38.472738] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:28.137 [2024-11-19 02:49:38.472821] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.137 [2024-11-19 02:49:38.547570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.137 [2024-11-19 02:49:38.592433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.137 [2024-11-19 02:49:38.592492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.137 [2024-11-19 02:49:38.592505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.137 [2024-11-19 02:49:38.592516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.137 [2024-11-19 02:49:38.592535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.137 [2024-11-19 02:49:38.593169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.137 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:28.396 [2024-11-19 02:49:38.973185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.396 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:28.396 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.396 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.396 02:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.654 ************************************ 00:08:28.654 START TEST lvs_grow_clean 00:08:28.654 ************************************ 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:28.654 02:49:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.654 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.912 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:28.912 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:29.170 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=31a30547-29c8-471c-a1c3-174796b41eee 00:08:29.170 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:29.170 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:29.429 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:29.429 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:29.429 02:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31a30547-29c8-471c-a1c3-174796b41eee lvol 150 00:08:29.687 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f3b72780-67c7-4e92-b1e5-b12aaa081d46 00:08:29.687 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.687 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:29.946 [2024-11-19 02:49:40.399240] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:29.946 [2024-11-19 02:49:40.399352] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:29.946 true 00:08:29.946 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
31a30547-29c8-471c-a1c3-174796b41eee 00:08:29.946 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:30.204 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:30.205 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.463 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3b72780-67c7-4e92-b1e5-b12aaa081d46 00:08:30.722 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:30.980 [2024-11-19 02:49:41.494578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.980 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132468 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 132468 /var/tmp/bdevperf.sock 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 132468 ']' 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.240 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:31.240 [2024-11-19 02:49:41.821822] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
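[annotation] Condensed, the lvs_grow setup traced across the preceding lines comes down to the RPC sequence below. This is a sketch: the workspace prefix is shortened, and the UUIDs printed by the create calls are captured into shell variables the way the harness does.

AIO_FILE=test/nvmf/target/aio_bdev    # backing file, path shortened

rm -f "$AIO_FILE"
truncate -s 200M "$AIO_FILE"
./scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore \
        --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB lvol

# grow the file and rescan: the AIO bdev goes from 51200 to 102400 blocks,
# but the lvstore still reports 49 data clusters until it is explicitly grown
truncate -s 400M "$AIO_FILE"
./scripts/rpc.py bdev_aio_rescan aio_bdev
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# export the lvol over NVMe/TCP for the bdevperf initiator
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420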
00:08:31.240 [2024-11-19 02:49:41.821914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132468 ] 00:08:31.499 [2024-11-19 02:49:41.891299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.499 [2024-11-19 02:49:41.939598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.499 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.499 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:31.499 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.067 Nvme0n1 00:08:32.067 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.326 [ 00:08:32.326 { 00:08:32.326 "name": "Nvme0n1", 00:08:32.326 "aliases": [ 00:08:32.326 "f3b72780-67c7-4e92-b1e5-b12aaa081d46" 00:08:32.326 ], 00:08:32.326 "product_name": "NVMe disk", 00:08:32.326 "block_size": 4096, 00:08:32.326 "num_blocks": 38912, 00:08:32.326 "uuid": "f3b72780-67c7-4e92-b1e5-b12aaa081d46", 00:08:32.326 "numa_id": 0, 00:08:32.326 "assigned_rate_limits": { 00:08:32.326 "rw_ios_per_sec": 0, 00:08:32.326 "rw_mbytes_per_sec": 0, 00:08:32.326 "r_mbytes_per_sec": 0, 00:08:32.326 "w_mbytes_per_sec": 0 00:08:32.326 }, 00:08:32.326 "claimed": false, 00:08:32.326 "zoned": false, 00:08:32.326 "supported_io_types": { 00:08:32.326 "read": true, 00:08:32.326 "write": true, 00:08:32.326 "unmap": true, 00:08:32.326 "flush": true, 00:08:32.326 "reset": true, 00:08:32.326 "nvme_admin": true, 00:08:32.326 "nvme_io": true, 00:08:32.326 "nvme_io_md": false, 00:08:32.326 "write_zeroes": true, 00:08:32.326 "zcopy": false, 00:08:32.326 "get_zone_info": false, 00:08:32.326 "zone_management": false, 00:08:32.326 "zone_append": false, 00:08:32.326 "compare": true, 00:08:32.326 "compare_and_write": true, 00:08:32.326 "abort": true, 00:08:32.326 "seek_hole": false, 00:08:32.326 "seek_data": false, 00:08:32.326 "copy": true, 00:08:32.326 "nvme_iov_md": false 00:08:32.326 }, 00:08:32.326 "memory_domains": [ 00:08:32.326 { 00:08:32.326 "dma_device_id": "system", 00:08:32.326 "dma_device_type": 1 00:08:32.326 } 00:08:32.326 ], 00:08:32.326 "driver_specific": { 00:08:32.326 "nvme": [ 00:08:32.326 { 00:08:32.326 "trid": { 00:08:32.326 "trtype": "TCP", 00:08:32.326 "adrfam": "IPv4", 00:08:32.326 "traddr": "10.0.0.2", 00:08:32.326 "trsvcid": "4420", 00:08:32.326 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.326 }, 00:08:32.326 "ctrlr_data": { 00:08:32.326 "cntlid": 1, 00:08:32.326 "vendor_id": "0x8086", 00:08:32.326 "model_number": "SPDK bdev Controller", 00:08:32.326 "serial_number": "SPDK0", 00:08:32.326 "firmware_revision": "25.01", 00:08:32.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.326 "oacs": { 00:08:32.326 "security": 0, 00:08:32.326 "format": 0, 00:08:32.326 "firmware": 0, 00:08:32.326 "ns_manage": 0 00:08:32.326 }, 00:08:32.326 "multi_ctrlr": true, 00:08:32.326 
"ana_reporting": false 00:08:32.326 }, 00:08:32.326 "vs": { 00:08:32.326 "nvme_version": "1.3" 00:08:32.326 }, 00:08:32.326 "ns_data": { 00:08:32.326 "id": 1, 00:08:32.326 "can_share": true 00:08:32.326 } 00:08:32.326 } 00:08:32.326 ], 00:08:32.326 "mp_policy": "active_passive" 00:08:32.326 } 00:08:32.326 } 00:08:32.326 ] 00:08:32.326 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132592 00:08:32.326 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.326 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:32.585 Running I/O for 10 seconds... 00:08:33.521 Latency(us) 00:08:33.521 [2024-11-19T01:49:44.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.521 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:08:33.521 [2024-11-19T01:49:44.136Z] =================================================================================================================== 00:08:33.521 [2024-11-19T01:49:44.136Z] Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:08:33.521 00:08:34.458 02:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:34.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.458 Nvme0n1 : 2.00 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:08:34.458 [2024-11-19T01:49:45.073Z] =================================================================================================================== 00:08:34.458 [2024-11-19T01:49:45.073Z] Total : 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:08:34.458 00:08:34.716 true 00:08:34.716 02:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:34.717 02:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:34.975 02:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:34.975 02:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:34.975 02:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132592 00:08:35.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.542 Nvme0n1 : 3.00 15536.67 60.69 0.00 0.00 0.00 0.00 0.00 00:08:35.542 [2024-11-19T01:49:46.157Z] =================================================================================================================== 00:08:35.542 [2024-11-19T01:49:46.157Z] Total : 15536.67 60.69 0.00 0.00 0.00 0.00 0.00 00:08:35.542 00:08:36.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.476 Nvme0n1 : 4.00 15653.00 61.14 0.00 0.00 0.00 0.00 0.00 00:08:36.476 [2024-11-19T01:49:47.091Z] 
=================================================================================================================== 00:08:36.476 [2024-11-19T01:49:47.091Z] Total : 15653.00 61.14 0.00 0.00 0.00 0.00 0.00 00:08:36.476 00:08:37.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.411 Nvme0n1 : 5.00 15722.80 61.42 0.00 0.00 0.00 0.00 0.00 00:08:37.411 [2024-11-19T01:49:48.026Z] =================================================================================================================== 00:08:37.411 [2024-11-19T01:49:48.026Z] Total : 15722.80 61.42 0.00 0.00 0.00 0.00 0.00 00:08:37.411 00:08:38.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.790 Nvme0n1 : 6.00 15790.50 61.68 0.00 0.00 0.00 0.00 0.00 00:08:38.790 [2024-11-19T01:49:49.405Z] =================================================================================================================== 00:08:38.790 [2024-11-19T01:49:49.405Z] Total : 15790.50 61.68 0.00 0.00 0.00 0.00 0.00 00:08:38.790 00:08:39.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.726 Nvme0n1 : 7.00 15820.71 61.80 0.00 0.00 0.00 0.00 0.00 00:08:39.726 [2024-11-19T01:49:50.341Z] =================================================================================================================== 00:08:39.726 [2024-11-19T01:49:50.341Z] Total : 15820.71 61.80 0.00 0.00 0.00 0.00 0.00 00:08:39.726 00:08:40.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.662 Nvme0n1 : 8.00 15782.38 61.65 0.00 0.00 0.00 0.00 0.00 00:08:40.662 [2024-11-19T01:49:51.277Z] =================================================================================================================== 00:08:40.662 [2024-11-19T01:49:51.277Z] Total : 15782.38 61.65 0.00 0.00 0.00 0.00 0.00 00:08:40.662 00:08:41.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.600 Nvme0n1 : 9.00 15820.89 61.80 0.00 0.00 0.00 0.00 0.00 00:08:41.600 [2024-11-19T01:49:52.215Z] =================================================================================================================== 00:08:41.600 [2024-11-19T01:49:52.215Z] Total : 15820.89 61.80 0.00 0.00 0.00 0.00 0.00 00:08:41.600 00:08:42.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.536 Nvme0n1 : 10.00 15851.70 61.92 0.00 0.00 0.00 0.00 0.00 00:08:42.536 [2024-11-19T01:49:53.151Z] =================================================================================================================== 00:08:42.536 [2024-11-19T01:49:53.151Z] Total : 15851.70 61.92 0.00 0.00 0.00 0.00 0.00 00:08:42.536 00:08:42.536 00:08:42.536 Latency(us) 00:08:42.536 [2024-11-19T01:49:53.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.536 Nvme0n1 : 10.01 15855.25 61.93 0.00 0.00 8068.36 4271.98 17573.36 00:08:42.536 [2024-11-19T01:49:53.151Z] =================================================================================================================== 00:08:42.536 [2024-11-19T01:49:53.151Z] Total : 15855.25 61.93 0.00 0.00 8068.36 4271.98 17573.36 00:08:42.536 { 00:08:42.536 "results": [ 00:08:42.536 { 00:08:42.536 "job": "Nvme0n1", 00:08:42.536 "core_mask": "0x2", 00:08:42.536 "workload": "randwrite", 00:08:42.536 "status": "finished", 00:08:42.536 "queue_depth": 128, 00:08:42.536 "io_size": 4096, 00:08:42.536 
"runtime": 10.005831, 00:08:42.536 "iops": 15855.25480092558, 00:08:42.536 "mibps": 61.93458906611555, 00:08:42.536 "io_failed": 0, 00:08:42.536 "io_timeout": 0, 00:08:42.536 "avg_latency_us": 8068.364613188309, 00:08:42.536 "min_latency_us": 4271.976296296296, 00:08:42.536 "max_latency_us": 17573.357037037036 00:08:42.536 } 00:08:42.536 ], 00:08:42.536 "core_count": 1 00:08:42.536 } 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132468 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 132468 ']' 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 132468 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132468 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132468' 00:08:42.536 killing process with pid 132468 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 132468 00:08:42.536 Received shutdown signal, test time was about 10.000000 seconds 00:08:42.536 00:08:42.536 Latency(us) 00:08:42.536 [2024-11-19T01:49:53.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.536 [2024-11-19T01:49:53.151Z] =================================================================================================================== 00:08:42.536 [2024-11-19T01:49:53.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:42.536 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 132468 00:08:42.794 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.053 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.311 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:43.311 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:43.570 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:43.570 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:43.570 02:49:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.828 [2024-11-19 02:49:54.313472] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.828 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:43.829 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:44.088 request: 00:08:44.088 { 00:08:44.088 "uuid": "31a30547-29c8-471c-a1c3-174796b41eee", 00:08:44.088 "method": "bdev_lvol_get_lvstores", 00:08:44.088 "req_id": 1 00:08:44.088 } 00:08:44.088 Got JSON-RPC error response 00:08:44.088 response: 00:08:44.088 { 00:08:44.088 "code": -19, 00:08:44.088 "message": "No such device" 00:08:44.088 } 00:08:44.088 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:44.088 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.088 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:44.088 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.088 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.347 aio_bdev 00:08:44.347 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f3b72780-67c7-4e92-b1e5-b12aaa081d46 00:08:44.347 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f3b72780-67c7-4e92-b1e5-b12aaa081d46 00:08:44.347 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.347 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:44.347 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.347 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.347 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:44.605 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f3b72780-67c7-4e92-b1e5-b12aaa081d46 -t 2000 00:08:44.864 [ 00:08:44.864 { 00:08:44.864 "name": "f3b72780-67c7-4e92-b1e5-b12aaa081d46", 00:08:44.864 "aliases": [ 00:08:44.864 "lvs/lvol" 00:08:44.864 ], 00:08:44.864 "product_name": "Logical Volume", 00:08:44.864 "block_size": 4096, 00:08:44.864 "num_blocks": 38912, 00:08:44.864 "uuid": "f3b72780-67c7-4e92-b1e5-b12aaa081d46", 00:08:44.864 "assigned_rate_limits": { 00:08:44.864 "rw_ios_per_sec": 0, 00:08:44.864 "rw_mbytes_per_sec": 0, 00:08:44.864 "r_mbytes_per_sec": 0, 00:08:44.864 "w_mbytes_per_sec": 0 00:08:44.864 }, 00:08:44.864 "claimed": false, 00:08:44.864 "zoned": false, 00:08:44.864 "supported_io_types": { 00:08:44.864 "read": true, 00:08:44.864 "write": true, 00:08:44.864 "unmap": true, 00:08:44.864 "flush": false, 00:08:44.864 "reset": true, 00:08:44.864 "nvme_admin": false, 00:08:44.864 "nvme_io": false, 00:08:44.864 "nvme_io_md": false, 00:08:44.864 "write_zeroes": true, 00:08:44.864 "zcopy": false, 00:08:44.864 "get_zone_info": false, 00:08:44.864 "zone_management": false, 00:08:44.864 "zone_append": false, 00:08:44.864 "compare": false, 00:08:44.864 "compare_and_write": false, 00:08:44.864 "abort": false, 00:08:44.864 "seek_hole": true, 00:08:44.864 "seek_data": true, 00:08:44.864 "copy": false, 00:08:44.864 "nvme_iov_md": false 00:08:44.864 }, 00:08:44.864 "driver_specific": { 00:08:44.864 "lvol": { 00:08:44.864 "lvol_store_uuid": "31a30547-29c8-471c-a1c3-174796b41eee", 00:08:44.864 "base_bdev": "aio_bdev", 00:08:44.864 "thin_provision": false, 00:08:44.864 "num_allocated_clusters": 38, 00:08:44.864 "snapshot": false, 00:08:44.864 "clone": false, 00:08:44.864 "esnap_clone": false 00:08:44.864 } 00:08:44.864 } 00:08:44.864 } 00:08:44.864 ] 00:08:44.864 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:44.864 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:44.864 
02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:45.123 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:45.123 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:45.123 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:45.382 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:45.382 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3b72780-67c7-4e92-b1e5-b12aaa081d46 00:08:45.641 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31a30547-29c8-471c-a1c3-174796b41eee 00:08:46.210 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.210 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.210 00:08:46.210 real 0m17.801s 00:08:46.210 user 0m17.170s 00:08:46.210 sys 0m1.965s 00:08:46.210 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.210 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:46.211 ************************************ 00:08:46.211 END TEST lvs_grow_clean 00:08:46.211 ************************************ 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 ************************************ 00:08:46.469 START TEST lvs_grow_dirty 00:08:46.469 ************************************ 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.469 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.728 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:46.728 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:46.986 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d22317dd-a054-4115-8ece-dac05a78d056 00:08:46.986 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:08:46.986 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:47.245 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:47.245 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:47.245 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d22317dd-a054-4115-8ece-dac05a78d056 lvol 150 00:08:47.504 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d7ae7bda-4648-4a8b-8574-6a83f1d8c881 00:08:47.504 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.504 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:47.762 [2024-11-19 02:49:58.236110] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:47.762 [2024-11-19 02:49:58.236213] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:47.763 true 00:08:47.763 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:08:47.763 02:49:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:48.021 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:48.021 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:48.280 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d7ae7bda-4648-4a8b-8574-6a83f1d8c881 00:08:48.539 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:48.798 [2024-11-19 02:49:59.307270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.798 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.056 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=134646 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 134646 /var/tmp/bdevperf.sock 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 134646 ']' 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:49.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.057 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.057 [2024-11-19 02:49:59.644277] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
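[annotation] The initiator side is the same in the clean and dirty variants and is what the bdevperf traces around this point show; roughly, with the same flags and socket path as the trace:

# bdevperf started suspended (-z) on its own RPC socket: core mask 0x2,
# 4 KiB random writes, queue depth 128, 10 second run
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

# attach the exported namespace; it appears as bdev Nvme0n1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000

# start the timed run; in both variants the lvstore is grown while I/O is in flight
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99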
00:08:49.057 [2024-11-19 02:49:59.644351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134646 ] 00:08:49.315 [2024-11-19 02:49:59.711499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.315 [2024-11-19 02:49:59.756504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.315 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.315 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:49.315 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:49.883 Nvme0n1 00:08:49.883 02:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:50.142 [ 00:08:50.142 { 00:08:50.142 "name": "Nvme0n1", 00:08:50.142 "aliases": [ 00:08:50.142 "d7ae7bda-4648-4a8b-8574-6a83f1d8c881" 00:08:50.142 ], 00:08:50.142 "product_name": "NVMe disk", 00:08:50.142 "block_size": 4096, 00:08:50.142 "num_blocks": 38912, 00:08:50.142 "uuid": "d7ae7bda-4648-4a8b-8574-6a83f1d8c881", 00:08:50.142 "numa_id": 0, 00:08:50.142 "assigned_rate_limits": { 00:08:50.142 "rw_ios_per_sec": 0, 00:08:50.142 "rw_mbytes_per_sec": 0, 00:08:50.142 "r_mbytes_per_sec": 0, 00:08:50.142 "w_mbytes_per_sec": 0 00:08:50.142 }, 00:08:50.142 "claimed": false, 00:08:50.142 "zoned": false, 00:08:50.142 "supported_io_types": { 00:08:50.142 "read": true, 00:08:50.142 "write": true, 00:08:50.142 "unmap": true, 00:08:50.142 "flush": true, 00:08:50.142 "reset": true, 00:08:50.142 "nvme_admin": true, 00:08:50.142 "nvme_io": true, 00:08:50.142 "nvme_io_md": false, 00:08:50.142 "write_zeroes": true, 00:08:50.142 "zcopy": false, 00:08:50.142 "get_zone_info": false, 00:08:50.142 "zone_management": false, 00:08:50.142 "zone_append": false, 00:08:50.142 "compare": true, 00:08:50.142 "compare_and_write": true, 00:08:50.142 "abort": true, 00:08:50.142 "seek_hole": false, 00:08:50.142 "seek_data": false, 00:08:50.142 "copy": true, 00:08:50.142 "nvme_iov_md": false 00:08:50.142 }, 00:08:50.142 "memory_domains": [ 00:08:50.142 { 00:08:50.142 "dma_device_id": "system", 00:08:50.142 "dma_device_type": 1 00:08:50.142 } 00:08:50.142 ], 00:08:50.142 "driver_specific": { 00:08:50.142 "nvme": [ 00:08:50.142 { 00:08:50.142 "trid": { 00:08:50.142 "trtype": "TCP", 00:08:50.142 "adrfam": "IPv4", 00:08:50.142 "traddr": "10.0.0.2", 00:08:50.142 "trsvcid": "4420", 00:08:50.142 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:50.142 }, 00:08:50.142 "ctrlr_data": { 00:08:50.142 "cntlid": 1, 00:08:50.142 "vendor_id": "0x8086", 00:08:50.142 "model_number": "SPDK bdev Controller", 00:08:50.142 "serial_number": "SPDK0", 00:08:50.142 "firmware_revision": "25.01", 00:08:50.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:50.142 "oacs": { 00:08:50.142 "security": 0, 00:08:50.142 "format": 0, 00:08:50.142 "firmware": 0, 00:08:50.142 "ns_manage": 0 00:08:50.142 }, 00:08:50.142 "multi_ctrlr": true, 00:08:50.142 
"ana_reporting": false 00:08:50.142 }, 00:08:50.142 "vs": { 00:08:50.142 "nvme_version": "1.3" 00:08:50.142 }, 00:08:50.142 "ns_data": { 00:08:50.142 "id": 1, 00:08:50.142 "can_share": true 00:08:50.142 } 00:08:50.142 } 00:08:50.142 ], 00:08:50.142 "mp_policy": "active_passive" 00:08:50.142 } 00:08:50.142 } 00:08:50.142 ] 00:08:50.142 02:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=134778 00:08:50.142 02:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:50.142 02:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:50.142 Running I/O for 10 seconds... 00:08:51.543 Latency(us) 00:08:51.543 [2024-11-19T01:50:02.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.543 Nvme0n1 : 1.00 14924.00 58.30 0.00 0.00 0.00 0.00 0.00 00:08:51.543 [2024-11-19T01:50:02.158Z] =================================================================================================================== 00:08:51.543 [2024-11-19T01:50:02.158Z] Total : 14924.00 58.30 0.00 0.00 0.00 0.00 0.00 00:08:51.543 00:08:52.108 02:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d22317dd-a054-4115-8ece-dac05a78d056 00:08:52.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.365 Nvme0n1 : 2.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:08:52.365 [2024-11-19T01:50:02.980Z] =================================================================================================================== 00:08:52.365 [2024-11-19T01:50:02.980Z] Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:08:52.365 00:08:52.365 true 00:08:52.365 02:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:08:52.365 02:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:52.622 02:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:52.622 02:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:52.622 02:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 134778 00:08:53.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.186 Nvme0n1 : 3.00 15283.00 59.70 0.00 0.00 0.00 0.00 0.00 00:08:53.186 [2024-11-19T01:50:03.801Z] =================================================================================================================== 00:08:53.186 [2024-11-19T01:50:03.801Z] Total : 15283.00 59.70 0.00 0.00 0.00 0.00 0.00 00:08:53.186 00:08:54.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.561 Nvme0n1 : 4.00 15367.50 60.03 0.00 0.00 0.00 0.00 0.00 00:08:54.561 [2024-11-19T01:50:05.176Z] 
=================================================================================================================== 00:08:54.561 [2024-11-19T01:50:05.176Z] Total : 15367.50 60.03 0.00 0.00 0.00 0.00 0.00 00:08:54.561 00:08:55.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.495 Nvme0n1 : 5.00 15431.20 60.28 0.00 0.00 0.00 0.00 0.00 00:08:55.495 [2024-11-19T01:50:06.110Z] =================================================================================================================== 00:08:55.495 [2024-11-19T01:50:06.110Z] Total : 15431.20 60.28 0.00 0.00 0.00 0.00 0.00 00:08:55.495 00:08:56.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.431 Nvme0n1 : 6.00 15505.17 60.57 0.00 0.00 0.00 0.00 0.00 00:08:56.431 [2024-11-19T01:50:07.046Z] =================================================================================================================== 00:08:56.431 [2024-11-19T01:50:07.046Z] Total : 15505.17 60.57 0.00 0.00 0.00 0.00 0.00 00:08:56.431 00:08:57.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.366 Nvme0n1 : 7.00 15539.86 60.70 0.00 0.00 0.00 0.00 0.00 00:08:57.366 [2024-11-19T01:50:07.981Z] =================================================================================================================== 00:08:57.366 [2024-11-19T01:50:07.981Z] Total : 15539.86 60.70 0.00 0.00 0.00 0.00 0.00 00:08:57.366 00:08:58.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.303 Nvme0n1 : 8.00 15570.12 60.82 0.00 0.00 0.00 0.00 0.00 00:08:58.303 [2024-11-19T01:50:08.918Z] =================================================================================================================== 00:08:58.303 [2024-11-19T01:50:08.918Z] Total : 15570.12 60.82 0.00 0.00 0.00 0.00 0.00 00:08:58.303 00:08:59.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.237 Nvme0n1 : 9.00 15604.00 60.95 0.00 0.00 0.00 0.00 0.00 00:08:59.237 [2024-11-19T01:50:09.852Z] =================================================================================================================== 00:08:59.237 [2024-11-19T01:50:09.853Z] Total : 15604.00 60.95 0.00 0.00 0.00 0.00 0.00 00:08:59.238 00:09:00.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.175 Nvme0n1 : 10.00 15593.60 60.91 0.00 0.00 0.00 0.00 0.00 00:09:00.175 [2024-11-19T01:50:10.790Z] =================================================================================================================== 00:09:00.175 [2024-11-19T01:50:10.790Z] Total : 15593.60 60.91 0.00 0.00 0.00 0.00 0.00 00:09:00.175 00:09:00.175 00:09:00.175 Latency(us) 00:09:00.175 [2024-11-19T01:50:10.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.175 Nvme0n1 : 10.00 15600.78 60.94 0.00 0.00 8200.24 4296.25 16505.36 00:09:00.175 [2024-11-19T01:50:10.790Z] =================================================================================================================== 00:09:00.175 [2024-11-19T01:50:10.790Z] Total : 15600.78 60.94 0.00 0.00 8200.24 4296.25 16505.36 00:09:00.175 { 00:09:00.175 "results": [ 00:09:00.175 { 00:09:00.175 "job": "Nvme0n1", 00:09:00.175 "core_mask": "0x2", 00:09:00.175 "workload": "randwrite", 00:09:00.175 "status": "finished", 00:09:00.175 "queue_depth": 128, 00:09:00.175 "io_size": 4096, 00:09:00.175 
"runtime": 10.003603, 00:09:00.175 "iops": 15600.779039312136, 00:09:00.175 "mibps": 60.94054312231303, 00:09:00.175 "io_failed": 0, 00:09:00.175 "io_timeout": 0, 00:09:00.175 "avg_latency_us": 8200.235921597217, 00:09:00.175 "min_latency_us": 4296.248888888889, 00:09:00.175 "max_latency_us": 16505.36296296296 00:09:00.175 } 00:09:00.175 ], 00:09:00.175 "core_count": 1 00:09:00.175 } 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 134646 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 134646 ']' 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 134646 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134646 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134646' 00:09:00.434 killing process with pid 134646 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 134646 00:09:00.434 Received shutdown signal, test time was about 10.000000 seconds 00:09:00.434 00:09:00.434 Latency(us) 00:09:00.434 [2024-11-19T01:50:11.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.434 [2024-11-19T01:50:11.049Z] =================================================================================================================== 00:09:00.434 [2024-11-19T01:50:11.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:00.434 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 134646 00:09:00.434 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.692 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:01.259 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:01.259 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:01.259 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:01.259 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:01.259 02:50:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 132070 00:09:01.259 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 132070 00:09:01.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 132070 Killed "${NVMF_APP[@]}" "$@" 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=136120 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 136120 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 136120 ']' 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.519 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.519 [2024-11-19 02:50:11.966499] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:01.519 [2024-11-19 02:50:11.966603] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.519 [2024-11-19 02:50:12.039565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.519 [2024-11-19 02:50:12.085510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.519 [2024-11-19 02:50:12.085576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.519 [2024-11-19 02:50:12.085605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.519 [2024-11-19 02:50:12.085616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
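[annotation] What sets the dirty variant apart is logged just above and below: the target holding the grown lvstore is killed with SIGKILL, a fresh target re-opens the same backing file, and blobstore recovery has to bring the lvol back. A sketch of that step, with the PID and UUID taken from this run:

# kill the target with no clean shutdown while the lvstore is dirty
kill -9 "$nvmfpid"

# start a fresh target and re-register the same backing file; the log shows
# "Performing recovery on blobstore" as the lvstore is replayed
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
./scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
./scripts/rpc.py bdev_wait_for_examine

# the grow performed before the crash must have persisted
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99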
00:09:01.519 [2024-11-19 02:50:12.085625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.519 [2024-11-19 02:50:12.086253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.778 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.778 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:01.778 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.778 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.778 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.778 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.778 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.036 [2024-11-19 02:50:12.469788] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:02.036 [2024-11-19 02:50:12.469919] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:02.037 [2024-11-19 02:50:12.469972] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d7ae7bda-4648-4a8b-8574-6a83f1d8c881 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d7ae7bda-4648-4a8b-8574-6a83f1d8c881 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.037 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.295 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d7ae7bda-4648-4a8b-8574-6a83f1d8c881 -t 2000 00:09:02.554 [ 00:09:02.554 { 00:09:02.554 "name": "d7ae7bda-4648-4a8b-8574-6a83f1d8c881", 00:09:02.554 "aliases": [ 00:09:02.554 "lvs/lvol" 00:09:02.554 ], 00:09:02.554 "product_name": "Logical Volume", 00:09:02.554 "block_size": 4096, 00:09:02.554 "num_blocks": 38912, 00:09:02.554 "uuid": "d7ae7bda-4648-4a8b-8574-6a83f1d8c881", 00:09:02.554 "assigned_rate_limits": { 00:09:02.554 "rw_ios_per_sec": 0, 00:09:02.554 "rw_mbytes_per_sec": 0, 
00:09:02.554 "r_mbytes_per_sec": 0, 00:09:02.554 "w_mbytes_per_sec": 0 00:09:02.554 }, 00:09:02.554 "claimed": false, 00:09:02.554 "zoned": false, 00:09:02.554 "supported_io_types": { 00:09:02.554 "read": true, 00:09:02.554 "write": true, 00:09:02.554 "unmap": true, 00:09:02.554 "flush": false, 00:09:02.554 "reset": true, 00:09:02.554 "nvme_admin": false, 00:09:02.554 "nvme_io": false, 00:09:02.554 "nvme_io_md": false, 00:09:02.554 "write_zeroes": true, 00:09:02.554 "zcopy": false, 00:09:02.554 "get_zone_info": false, 00:09:02.554 "zone_management": false, 00:09:02.554 "zone_append": false, 00:09:02.554 "compare": false, 00:09:02.554 "compare_and_write": false, 00:09:02.554 "abort": false, 00:09:02.554 "seek_hole": true, 00:09:02.554 "seek_data": true, 00:09:02.554 "copy": false, 00:09:02.554 "nvme_iov_md": false 00:09:02.554 }, 00:09:02.554 "driver_specific": { 00:09:02.554 "lvol": { 00:09:02.554 "lvol_store_uuid": "d22317dd-a054-4115-8ece-dac05a78d056", 00:09:02.554 "base_bdev": "aio_bdev", 00:09:02.554 "thin_provision": false, 00:09:02.554 "num_allocated_clusters": 38, 00:09:02.554 "snapshot": false, 00:09:02.554 "clone": false, 00:09:02.554 "esnap_clone": false 00:09:02.554 } 00:09:02.554 } 00:09:02.554 } 00:09:02.554 ] 00:09:02.554 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:02.554 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:02.554 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:02.812 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:02.812 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:02.812 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:03.071 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:03.071 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.329 [2024-11-19 02:50:13.831541] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:03.329 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:03.329 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:03.329 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:03.329 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.329 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.330 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.330 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.330 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.330 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.330 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.330 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:03.330 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:03.589 request: 00:09:03.589 { 00:09:03.589 "uuid": "d22317dd-a054-4115-8ece-dac05a78d056", 00:09:03.589 "method": "bdev_lvol_get_lvstores", 00:09:03.589 "req_id": 1 00:09:03.589 } 00:09:03.589 Got JSON-RPC error response 00:09:03.589 response: 00:09:03.589 { 00:09:03.589 "code": -19, 00:09:03.589 "message": "No such device" 00:09:03.589 } 00:09:03.589 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:03.589 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.589 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:03.589 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.589 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.848 aio_bdev 00:09:03.848 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d7ae7bda-4648-4a8b-8574-6a83f1d8c881 00:09:03.848 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d7ae7bda-4648-4a8b-8574-6a83f1d8c881 00:09:03.848 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.848 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:03.848 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.848 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.848 02:50:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:04.107 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d7ae7bda-4648-4a8b-8574-6a83f1d8c881 -t 2000 00:09:04.366 [ 00:09:04.366 { 00:09:04.366 "name": "d7ae7bda-4648-4a8b-8574-6a83f1d8c881", 00:09:04.366 "aliases": [ 00:09:04.366 "lvs/lvol" 00:09:04.366 ], 00:09:04.366 "product_name": "Logical Volume", 00:09:04.366 "block_size": 4096, 00:09:04.366 "num_blocks": 38912, 00:09:04.366 "uuid": "d7ae7bda-4648-4a8b-8574-6a83f1d8c881", 00:09:04.366 "assigned_rate_limits": { 00:09:04.366 "rw_ios_per_sec": 0, 00:09:04.366 "rw_mbytes_per_sec": 0, 00:09:04.366 "r_mbytes_per_sec": 0, 00:09:04.366 "w_mbytes_per_sec": 0 00:09:04.366 }, 00:09:04.366 "claimed": false, 00:09:04.366 "zoned": false, 00:09:04.366 "supported_io_types": { 00:09:04.366 "read": true, 00:09:04.366 "write": true, 00:09:04.366 "unmap": true, 00:09:04.366 "flush": false, 00:09:04.366 "reset": true, 00:09:04.366 "nvme_admin": false, 00:09:04.366 "nvme_io": false, 00:09:04.366 "nvme_io_md": false, 00:09:04.366 "write_zeroes": true, 00:09:04.366 "zcopy": false, 00:09:04.366 "get_zone_info": false, 00:09:04.366 "zone_management": false, 00:09:04.366 "zone_append": false, 00:09:04.366 "compare": false, 00:09:04.366 "compare_and_write": false, 00:09:04.366 "abort": false, 00:09:04.366 "seek_hole": true, 00:09:04.366 "seek_data": true, 00:09:04.366 "copy": false, 00:09:04.366 "nvme_iov_md": false 00:09:04.366 }, 00:09:04.366 "driver_specific": { 00:09:04.366 "lvol": { 00:09:04.366 "lvol_store_uuid": "d22317dd-a054-4115-8ece-dac05a78d056", 00:09:04.366 "base_bdev": "aio_bdev", 00:09:04.366 "thin_provision": false, 00:09:04.366 "num_allocated_clusters": 38, 00:09:04.366 "snapshot": false, 00:09:04.366 "clone": false, 00:09:04.366 "esnap_clone": false 00:09:04.366 } 00:09:04.366 } 00:09:04.366 } 00:09:04.366 ] 00:09:04.366 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:04.366 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:04.366 02:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:04.625 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:04.625 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:04.625 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:04.884 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:04.884 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d7ae7bda-4648-4a8b-8574-6a83f1d8c881 00:09:05.143 02:50:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d22317dd-a054-4115-8ece-dac05a78d056 00:09:05.711 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.711 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:05.970 00:09:05.970 real 0m19.462s 00:09:05.970 user 0m49.327s 00:09:05.970 sys 0m4.542s 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.970 ************************************ 00:09:05.970 END TEST lvs_grow_dirty 00:09:05.970 ************************************ 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:05.970 nvmf_trace.0 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.970 rmmod nvme_tcp 00:09:05.970 rmmod nvme_fabrics 00:09:05.970 rmmod nvme_keyring 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:05.970 
02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 136120 ']' 00:09:05.970 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 136120 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 136120 ']' 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 136120 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136120 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136120' 00:09:05.971 killing process with pid 136120 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 136120 00:09:05.971 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 136120 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.232 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.147 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.147 00:09:08.147 real 0m42.702s 00:09:08.147 user 1m12.445s 00:09:08.147 sys 0m8.486s 00:09:08.148 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.148 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.148 ************************************ 00:09:08.148 END TEST nvmf_lvs_grow 00:09:08.148 ************************************ 00:09:08.148 02:50:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:08.148 02:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.148 02:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.148 02:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.148 ************************************ 00:09:08.148 START TEST nvmf_bdev_io_wait 00:09:08.148 ************************************ 00:09:08.148 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:08.408 * Looking for test storage... 00:09:08.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:08.408 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.409 --rc genhtml_branch_coverage=1 00:09:08.409 --rc genhtml_function_coverage=1 00:09:08.409 --rc genhtml_legend=1 00:09:08.409 --rc geninfo_all_blocks=1 00:09:08.409 --rc geninfo_unexecuted_blocks=1 00:09:08.409 00:09:08.409 ' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.409 --rc genhtml_branch_coverage=1 00:09:08.409 --rc genhtml_function_coverage=1 00:09:08.409 --rc genhtml_legend=1 00:09:08.409 --rc geninfo_all_blocks=1 00:09:08.409 --rc geninfo_unexecuted_blocks=1 00:09:08.409 00:09:08.409 ' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.409 --rc genhtml_branch_coverage=1 00:09:08.409 --rc genhtml_function_coverage=1 00:09:08.409 --rc genhtml_legend=1 00:09:08.409 --rc geninfo_all_blocks=1 00:09:08.409 --rc geninfo_unexecuted_blocks=1 00:09:08.409 00:09:08.409 ' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.409 --rc genhtml_branch_coverage=1 00:09:08.409 --rc genhtml_function_coverage=1 00:09:08.409 --rc genhtml_legend=1 00:09:08.409 --rc geninfo_all_blocks=1 00:09:08.409 --rc geninfo_unexecuted_blocks=1 00:09:08.409 00:09:08.409 ' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.409 02:50:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.409 02:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.950 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:10.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:10.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.951 02:50:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:10.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:10.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:10.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:09:10.951 00:09:10.951 --- 10.0.0.2 ping statistics --- 00:09:10.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.951 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:09:10.951 00:09:10.951 --- 10.0.0.1 ping statistics --- 00:09:10.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.951 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:10.951 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=138660 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 138660 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 138660 ']' 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.952 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.952 [2024-11-19 02:50:21.295062] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:10.952 [2024-11-19 02:50:21.295132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.952 [2024-11-19 02:50:21.369475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.952 [2024-11-19 02:50:21.419075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.952 [2024-11-19 02:50:21.419137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.952 [2024-11-19 02:50:21.419167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.952 [2024-11-19 02:50:21.419178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.952 [2024-11-19 02:50:21.419188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.952 [2024-11-19 02:50:21.420891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.952 [2024-11-19 02:50:21.420951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.952 [2024-11-19 02:50:21.420949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.952 [2024-11-19 02:50:21.420920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:11.212 [2024-11-19 02:50:21.694205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 Malloc0 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 [2024-11-19 02:50:21.746109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=138804 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=138806 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:11.212 { 00:09:11.212 "params": { 
00:09:11.212 "name": "Nvme$subsystem", 00:09:11.212 "trtype": "$TEST_TRANSPORT", 00:09:11.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.212 "adrfam": "ipv4", 00:09:11.212 "trsvcid": "$NVMF_PORT", 00:09:11.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.212 "hdgst": ${hdgst:-false}, 00:09:11.212 "ddgst": ${ddgst:-false} 00:09:11.212 }, 00:09:11.212 "method": "bdev_nvme_attach_controller" 00:09:11.212 } 00:09:11.212 EOF 00:09:11.212 )") 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=138808 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:11.212 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:11.212 { 00:09:11.212 "params": { 00:09:11.212 "name": "Nvme$subsystem", 00:09:11.212 "trtype": "$TEST_TRANSPORT", 00:09:11.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.212 "adrfam": "ipv4", 00:09:11.212 "trsvcid": "$NVMF_PORT", 00:09:11.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.213 "hdgst": ${hdgst:-false}, 00:09:11.213 "ddgst": ${ddgst:-false} 00:09:11.213 }, 00:09:11.213 "method": "bdev_nvme_attach_controller" 00:09:11.213 } 00:09:11.213 EOF 00:09:11.213 )") 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=138811 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:11.213 { 00:09:11.213 "params": { 00:09:11.213 "name": "Nvme$subsystem", 00:09:11.213 "trtype": "$TEST_TRANSPORT", 00:09:11.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.213 "adrfam": "ipv4", 00:09:11.213 "trsvcid": "$NVMF_PORT", 00:09:11.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.213 "hdgst": ${hdgst:-false}, 
00:09:11.213 "ddgst": ${ddgst:-false} 00:09:11.213 }, 00:09:11.213 "method": "bdev_nvme_attach_controller" 00:09:11.213 } 00:09:11.213 EOF 00:09:11.213 )") 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:11.213 { 00:09:11.213 "params": { 00:09:11.213 "name": "Nvme$subsystem", 00:09:11.213 "trtype": "$TEST_TRANSPORT", 00:09:11.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.213 "adrfam": "ipv4", 00:09:11.213 "trsvcid": "$NVMF_PORT", 00:09:11.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.213 "hdgst": ${hdgst:-false}, 00:09:11.213 "ddgst": ${ddgst:-false} 00:09:11.213 }, 00:09:11.213 "method": "bdev_nvme_attach_controller" 00:09:11.213 } 00:09:11.213 EOF 00:09:11.213 )") 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 138804 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:11.213 "params": { 00:09:11.213 "name": "Nvme1", 00:09:11.213 "trtype": "tcp", 00:09:11.213 "traddr": "10.0.0.2", 00:09:11.213 "adrfam": "ipv4", 00:09:11.213 "trsvcid": "4420", 00:09:11.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.213 "hdgst": false, 00:09:11.213 "ddgst": false 00:09:11.213 }, 00:09:11.213 "method": "bdev_nvme_attach_controller" 00:09:11.213 }' 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
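For readability, the four bdevperf launches traced above condense to the following sketch. It is reconstructed from this log, not the verbatim target/bdev_io_wait.sh; gen_nvmf_target_json is the test/nvmf/common.sh helper whose resolved bdev_nvme_attach_controller config is the JSON printed in the trace, and feeding it through process substitution is an assumption about how --json /dev/fd/63 is wired up.

  BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # one job per core and workload, each handed its NVMe-oF attach config on /dev/fd/63
  $BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID"   # the read/flush/unmap jobs are reaped with the later waits at bdev_io_wait.sh@38-40

The repeated DPDK EAL banners and the four 'Running I/O for 1 seconds...' lines that follow come from these four processes writing to the same console at once, which is why their output interleaves below.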
00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:11.213 "params": { 00:09:11.213 "name": "Nvme1", 00:09:11.213 "trtype": "tcp", 00:09:11.213 "traddr": "10.0.0.2", 00:09:11.213 "adrfam": "ipv4", 00:09:11.213 "trsvcid": "4420", 00:09:11.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.213 "hdgst": false, 00:09:11.213 "ddgst": false 00:09:11.213 }, 00:09:11.213 "method": "bdev_nvme_attach_controller" 00:09:11.213 }' 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:11.213 "params": { 00:09:11.213 "name": "Nvme1", 00:09:11.213 "trtype": "tcp", 00:09:11.213 "traddr": "10.0.0.2", 00:09:11.213 "adrfam": "ipv4", 00:09:11.213 "trsvcid": "4420", 00:09:11.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.213 "hdgst": false, 00:09:11.213 "ddgst": false 00:09:11.213 }, 00:09:11.213 "method": "bdev_nvme_attach_controller" 00:09:11.213 }' 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:11.213 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:11.213 "params": { 00:09:11.213 "name": "Nvme1", 00:09:11.213 "trtype": "tcp", 00:09:11.213 "traddr": "10.0.0.2", 00:09:11.213 "adrfam": "ipv4", 00:09:11.213 "trsvcid": "4420", 00:09:11.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.213 "hdgst": false, 00:09:11.213 "ddgst": false 00:09:11.213 }, 00:09:11.213 "method": "bdev_nvme_attach_controller" 00:09:11.213 }' 00:09:11.213 [2024-11-19 02:50:21.795171] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:11.213 [2024-11-19 02:50:21.795171] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:11.213 [2024-11-19 02:50:21.795252] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 02:50:21.795252] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:11.213 --proc-type=auto ] 00:09:11.213 [2024-11-19 02:50:21.796446] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:11.213 [2024-11-19 02:50:21.796446] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:11.213 [2024-11-19 02:50:21.796523] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 02:50:21.796523] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:11.213 --proc-type=auto ] 00:09:11.472 [2024-11-19 02:50:21.976256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.472 [2024-11-19 02:50:22.018867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:11.472 [2024-11-19 02:50:22.078028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.731 [2024-11-19 02:50:22.122067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:11.731 [2024-11-19 02:50:22.152963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.731 [2024-11-19 02:50:22.190324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:11.731 [2024-11-19 02:50:22.227382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.731 [2024-11-19 02:50:22.265598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:11.990 Running I/O for 1 seconds... 00:09:11.990 Running I/O for 1 seconds... 00:09:11.990 Running I/O for 1 seconds... 00:09:11.990 Running I/O for 1 seconds... 00:09:12.928 6077.00 IOPS, 23.74 MiB/s [2024-11-19T01:50:23.543Z] 8410.00 IOPS, 32.85 MiB/s [2024-11-19T01:50:23.543Z] 200448.00 IOPS, 783.00 MiB/s 00:09:12.928 Latency(us) 00:09:12.928 [2024-11-19T01:50:23.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.928 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:12.928 Nvme1n1 : 1.01 8467.82 33.08 0.00 0.00 15041.86 7330.32 27185.30 00:09:12.928 [2024-11-19T01:50:23.543Z] =================================================================================================================== 00:09:12.928 [2024-11-19T01:50:23.543Z] Total : 8467.82 33.08 0.00 0.00 15041.86 7330.32 27185.30 00:09:12.928 00:09:12.928 Latency(us) 00:09:12.928 [2024-11-19T01:50:23.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.928 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:12.928 Nvme1n1 : 1.00 200076.26 781.55 0.00 0.00 636.20 288.24 1832.58 00:09:12.928 [2024-11-19T01:50:23.543Z] =================================================================================================================== 00:09:12.928 [2024-11-19T01:50:23.543Z] Total : 200076.26 781.55 0.00 0.00 636.20 288.24 1832.58 00:09:12.928 00:09:12.928 Latency(us) 00:09:12.928 [2024-11-19T01:50:23.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.928 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:12.928 Nvme1n1 : 1.02 6096.14 23.81 0.00 0.00 20787.95 11019.76 31845.64 00:09:12.928 [2024-11-19T01:50:23.543Z] =================================================================================================================== 00:09:12.928 [2024-11-19T01:50:23.543Z] Total : 6096.14 23.81 0.00 0.00 20787.95 11019.76 31845.64 00:09:12.928 6117.00 IOPS, 23.89 MiB/s 00:09:12.928 Latency(us) 00:09:12.928 [2024-11-19T01:50:23.543Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.928 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:12.928 Nvme1n1 : 1.01 6227.68 24.33 0.00 0.00 20498.06 3640.89 45826.65 00:09:12.928 [2024-11-19T01:50:23.543Z] =================================================================================================================== 00:09:12.928 [2024-11-19T01:50:23.543Z] Total : 6227.68 24.33 0.00 0.00 20498.06 3640.89 45826.65 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 138806 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 138808 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 138811 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.187 rmmod nvme_tcp 00:09:13.187 rmmod nvme_fabrics 00:09:13.187 rmmod nvme_keyring 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 138660 ']' 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 138660 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 138660 ']' 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 138660 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138660 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138660' 00:09:13.187 killing process with pid 138660 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 138660 00:09:13.187 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 138660 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.446 02:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.360 02:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.360 00:09:15.360 real 0m7.191s 00:09:15.360 user 0m15.226s 00:09:15.360 sys 0m3.596s 00:09:15.360 02:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.360 02:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.360 ************************************ 00:09:15.360 END TEST nvmf_bdev_io_wait 00:09:15.360 ************************************ 00:09:15.360 02:50:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.360 02:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.360 02:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.360 02:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.620 ************************************ 00:09:15.620 START TEST nvmf_queue_depth 00:09:15.620 ************************************ 00:09:15.620 02:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.620 * Looking for test storage... 
00:09:15.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.620 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.621 --rc genhtml_branch_coverage=1 00:09:15.621 --rc genhtml_function_coverage=1 00:09:15.621 --rc genhtml_legend=1 00:09:15.621 --rc geninfo_all_blocks=1 00:09:15.621 --rc geninfo_unexecuted_blocks=1 00:09:15.621 00:09:15.621 ' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.621 --rc genhtml_branch_coverage=1 00:09:15.621 --rc genhtml_function_coverage=1 00:09:15.621 --rc genhtml_legend=1 00:09:15.621 --rc geninfo_all_blocks=1 00:09:15.621 --rc geninfo_unexecuted_blocks=1 00:09:15.621 00:09:15.621 ' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.621 --rc genhtml_branch_coverage=1 00:09:15.621 --rc genhtml_function_coverage=1 00:09:15.621 --rc genhtml_legend=1 00:09:15.621 --rc geninfo_all_blocks=1 00:09:15.621 --rc geninfo_unexecuted_blocks=1 00:09:15.621 00:09:15.621 ' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.621 --rc genhtml_branch_coverage=1 00:09:15.621 --rc genhtml_function_coverage=1 00:09:15.621 --rc genhtml_legend=1 00:09:15.621 --rc geninfo_all_blocks=1 00:09:15.621 --rc geninfo_unexecuted_blocks=1 00:09:15.621 00:09:15.621 ' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.621 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:18.159 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:18.159 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:18.159 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:18.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.159 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:09:18.160 00:09:18.160 --- 10.0.0.2 ping statistics --- 00:09:18.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.160 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:09:18.160 00:09:18.160 --- 10.0.0.1 ping statistics --- 00:09:18.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.160 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=140948 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 140948 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140948 ']' 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.160 [2024-11-19 02:50:28.537357] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
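Condensed, the namespace plumbing and target launch captured above amount to this sketch; interface names, addresses and binary paths are the ones this host logs, and the real steps live in the nvmf_tcp_init/nvmfappstart helpers of test/nvmf/common.sh rather than in a single script.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic reach the listener
  ping -c 1 10.0.0.2                                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Everything that follows (nvmfpid=140948) talks to that single-core target over 10.0.0.2:4420.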
00:09:18.160 [2024-11-19 02:50:28.537445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.160 [2024-11-19 02:50:28.611698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.160 [2024-11-19 02:50:28.653052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.160 [2024-11-19 02:50:28.653114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.160 [2024-11-19 02:50:28.653141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.160 [2024-11-19 02:50:28.653152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.160 [2024-11-19 02:50:28.653162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.160 [2024-11-19 02:50:28.653733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.160 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 [2024-11-19 02:50:28.788790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 Malloc0 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 02:50:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 [2024-11-19 02:50:28.837660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=141061 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 141061 /var/tmp/bdevperf.sock 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 141061 ']' 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:18.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.419 02:50:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 [2024-11-19 02:50:28.884755] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
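The queue-depth run that starts here drives bdevperf through its RPC socket instead of a JSON config. Condensed into a sketch (rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py; the flags, socket path and NQN are exactly as logged here and just below):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!                      # 141061 in this run
  # attach the target namespace as bdev NVMe0n1 once the bdevperf RPC socket is listening...
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # ...then kick off the 10-second verify workload at queue depth 1024
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests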
00:09:18.419 [2024-11-19 02:50:28.884839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141061 ] 00:09:18.419 [2024-11-19 02:50:28.950210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.420 [2024-11-19 02:50:28.995362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.678 02:50:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.678 02:50:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:18.678 02:50:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:18.678 02:50:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.678 02:50:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.937 NVMe0n1 00:09:18.937 02:50:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.937 02:50:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.937 Running I/O for 10 seconds... 00:09:21.251 8194.00 IOPS, 32.01 MiB/s [2024-11-19T01:50:32.804Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-19T01:50:33.740Z] 8672.67 IOPS, 33.88 MiB/s [2024-11-19T01:50:34.676Z] 8703.25 IOPS, 34.00 MiB/s [2024-11-19T01:50:35.614Z] 8799.00 IOPS, 34.37 MiB/s [2024-11-19T01:50:36.551Z] 8806.67 IOPS, 34.40 MiB/s [2024-11-19T01:50:37.488Z] 8791.00 IOPS, 34.34 MiB/s [2024-11-19T01:50:38.865Z] 8829.00 IOPS, 34.49 MiB/s [2024-11-19T01:50:39.800Z] 8868.33 IOPS, 34.64 MiB/s [2024-11-19T01:50:39.800Z] 8901.90 IOPS, 34.77 MiB/s 00:09:29.185 Latency(us) 00:09:29.185 [2024-11-19T01:50:39.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.185 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:29.185 Verification LBA range: start 0x0 length 0x4000 00:09:29.185 NVMe0n1 : 10.08 8931.07 34.89 0.00 0.00 114213.67 21651.15 71458.51 00:09:29.185 [2024-11-19T01:50:39.800Z] =================================================================================================================== 00:09:29.185 [2024-11-19T01:50:39.800Z] Total : 8931.07 34.89 0.00 0.00 114213.67 21651.15 71458.51 00:09:29.185 { 00:09:29.185 "results": [ 00:09:29.185 { 00:09:29.185 "job": "NVMe0n1", 00:09:29.185 "core_mask": "0x1", 00:09:29.185 "workload": "verify", 00:09:29.185 "status": "finished", 00:09:29.185 "verify_range": { 00:09:29.185 "start": 0, 00:09:29.185 "length": 16384 00:09:29.185 }, 00:09:29.185 "queue_depth": 1024, 00:09:29.185 "io_size": 4096, 00:09:29.185 "runtime": 10.080652, 00:09:29.185 "iops": 8931.069141162694, 00:09:29.185 "mibps": 34.88698883266677, 00:09:29.185 "io_failed": 0, 00:09:29.185 "io_timeout": 0, 00:09:29.185 "avg_latency_us": 114213.67491747081, 00:09:29.185 "min_latency_us": 21651.152592592593, 00:09:29.185 "max_latency_us": 71458.5125925926 00:09:29.185 } 00:09:29.185 ], 00:09:29.185 "core_count": 1 00:09:29.185 } 00:09:29.185 02:50:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 141061 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 141061 ']' 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 141061 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141061 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141061' 00:09:29.185 killing process with pid 141061 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 141061 00:09:29.185 Received shutdown signal, test time was about 10.000000 seconds 00:09:29.185 00:09:29.185 Latency(us) 00:09:29.185 [2024-11-19T01:50:39.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.185 [2024-11-19T01:50:39.800Z] =================================================================================================================== 00:09:29.185 [2024-11-19T01:50:39.800Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.185 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 141061 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.445 rmmod nvme_tcp 00:09:29.445 rmmod nvme_fabrics 00:09:29.445 rmmod nvme_keyring 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 140948 ']' 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 140948 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140948 ']' 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 140948 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140948 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140948' 00:09:29.445 killing process with pid 140948 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140948 00:09:29.445 02:50:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140948 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.707 02:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:31.616 00:09:31.616 real 0m16.184s 00:09:31.616 user 0m22.651s 00:09:31.616 sys 0m3.172s 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.616 ************************************ 00:09:31.616 END TEST nvmf_queue_depth 00:09:31.616 ************************************ 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.616 ************************************ 00:09:31.616 START TEST nvmf_target_multipath 00:09:31.616 ************************************ 00:09:31.616 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:31.876 * Looking for test storage... 00:09:31.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.876 --rc genhtml_branch_coverage=1 00:09:31.876 --rc genhtml_function_coverage=1 00:09:31.876 --rc genhtml_legend=1 00:09:31.876 --rc geninfo_all_blocks=1 00:09:31.876 --rc geninfo_unexecuted_blocks=1 00:09:31.876 00:09:31.876 ' 00:09:31.876 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.876 --rc genhtml_branch_coverage=1 00:09:31.876 --rc genhtml_function_coverage=1 00:09:31.877 --rc genhtml_legend=1 00:09:31.877 --rc geninfo_all_blocks=1 00:09:31.877 --rc geninfo_unexecuted_blocks=1 00:09:31.877 00:09:31.877 ' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.877 --rc genhtml_branch_coverage=1 00:09:31.877 --rc genhtml_function_coverage=1 00:09:31.877 --rc genhtml_legend=1 00:09:31.877 --rc geninfo_all_blocks=1 00:09:31.877 --rc geninfo_unexecuted_blocks=1 00:09:31.877 00:09:31.877 ' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.877 --rc genhtml_branch_coverage=1 00:09:31.877 --rc genhtml_function_coverage=1 00:09:31.877 --rc genhtml_legend=1 00:09:31.877 --rc geninfo_all_blocks=1 00:09:31.877 --rc geninfo_unexecuted_blocks=1 00:09:31.877 00:09:31.877 ' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:31.877 02:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:34.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:34.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.418 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:34.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.419 02:50:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:34.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:09:34.419 00:09:34.419 --- 10.0.0.2 ping statistics --- 00:09:34.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.419 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:09:34.419 00:09:34.419 --- 10.0.0.1 ping statistics --- 00:09:34.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.419 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:34.419 only one NIC for nvmf test 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
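For reference, the nvmftestinit sequence traced above boils down to the commands below. This is only a condensed recap of this particular run: the cvl_0_0/cvl_0_1 port names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are taken straight from this log and will differ on other hosts.

    # isolate the target-side port in its own network namespace and address both ends
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port and check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1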
00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.419 rmmod nvme_tcp 00:09:34.419 rmmod nvme_fabrics 00:09:34.419 rmmod nvme_keyring 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.419 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.331 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.591 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.591 00:09:36.591 real 0m4.723s 00:09:36.591 user 0m0.972s 00:09:36.591 sys 0m1.764s 00:09:36.591 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.591 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:36.591 ************************************ 00:09:36.591 END TEST nvmf_target_multipath 00:09:36.591 ************************************ 00:09:36.591 02:50:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.591 02:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.591 02:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.591 02:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.591 ************************************ 00:09:36.591 START TEST nvmf_zcopy 00:09:36.591 ************************************ 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.591 * Looking for test storage... 
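The multipath test goes no further on this rig: it reports 'only one NIC for nvmf test' and exits 0, so nvmftestfini runs immediately. Per the trace just above, that teardown reduces to roughly the following sketch; the single pipeline shown for the iptr step is an assumption inferred from the three commands traced together, and _remove_spdk_ns runs with its output redirected, so its body is not visible in this log.

    sync
    modprobe -v -r nvme-tcp        # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    # iptr: drop only the SPDK_NVMF-tagged rules added during setup (assumed pipeline)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    _remove_spdk_ns                # output suppressed in the trace; presumably the namespace cleanup
    ip -4 addr flush cvl_0_1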
00:09:36.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.591 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.592 --rc genhtml_branch_coverage=1 00:09:36.592 --rc genhtml_function_coverage=1 00:09:36.592 --rc genhtml_legend=1 00:09:36.592 --rc geninfo_all_blocks=1 00:09:36.592 --rc geninfo_unexecuted_blocks=1 00:09:36.592 00:09:36.592 ' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.592 --rc genhtml_branch_coverage=1 00:09:36.592 --rc genhtml_function_coverage=1 00:09:36.592 --rc genhtml_legend=1 00:09:36.592 --rc geninfo_all_blocks=1 00:09:36.592 --rc geninfo_unexecuted_blocks=1 00:09:36.592 00:09:36.592 ' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.592 --rc genhtml_branch_coverage=1 00:09:36.592 --rc genhtml_function_coverage=1 00:09:36.592 --rc genhtml_legend=1 00:09:36.592 --rc geninfo_all_blocks=1 00:09:36.592 --rc geninfo_unexecuted_blocks=1 00:09:36.592 00:09:36.592 ' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.592 --rc genhtml_branch_coverage=1 00:09:36.592 --rc genhtml_function_coverage=1 00:09:36.592 --rc genhtml_legend=1 00:09:36.592 --rc geninfo_all_blocks=1 00:09:36.592 --rc geninfo_unexecuted_blocks=1 00:09:36.592 00:09:36.592 ' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.592 02:50:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.132 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.133 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:39.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:09:39.133 00:09:39.133 --- 10.0.0.2 ping statistics --- 00:09:39.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.133 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:09:39.133 00:09:39.133 --- 10.0.0.1 ping statistics --- 00:09:39.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.133 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=146268 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 146268 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 146268 ']' 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.133 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.133 [2024-11-19 02:50:49.541290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
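
Note: the nvmf_tcp_init trace interleaved above (interface discovery, namespace creation, addressing, firewall rule, ping checks, then launching nvmf_tgt inside the namespace) condenses to roughly the sketch below. The interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the nvmf_tgt options are the ones visible in this run; treat them as placeholders on other machines.

  # Target-side network prep as performed by nvmf_tcp_init (nvmf/common.sh),
  # reconstructed from the trace above; adjust NIC names/paths for your system.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
  TGT_IF=cvl_0_0          # port moved into the target namespace (10.0.0.2)
  INI_IF=cvl_0_1          # port kept in the default namespace   (10.0.0.1)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port on the initiator-facing interface.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions (mirrors the two pings above).
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Start the target inside the namespace, as the nvmfappstart trace shows.
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
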
00:09:39.133 [2024-11-19 02:50:49.541392] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.133 [2024-11-19 02:50:49.617572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.133 [2024-11-19 02:50:49.663869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.134 [2024-11-19 02:50:49.663926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.134 [2024-11-19 02:50:49.663955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.134 [2024-11-19 02:50:49.663974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.134 [2024-11-19 02:50:49.663984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.134 [2024-11-19 02:50:49.664644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.393 [2024-11-19 02:50:49.810394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.393 [2024-11-19 02:50:49.826592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.393 malloc0 00:09:39.393 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.394 { 00:09:39.394 "params": { 00:09:39.394 "name": "Nvme$subsystem", 00:09:39.394 "trtype": "$TEST_TRANSPORT", 00:09:39.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.394 "adrfam": "ipv4", 00:09:39.394 "trsvcid": "$NVMF_PORT", 00:09:39.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.394 "hdgst": ${hdgst:-false}, 00:09:39.394 "ddgst": ${ddgst:-false} 00:09:39.394 }, 00:09:39.394 "method": "bdev_nvme_attach_controller" 00:09:39.394 } 00:09:39.394 EOF 00:09:39.394 )") 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
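
Note: stripped of the xtrace decoration, the target configuration that zcopy.sh performs above amounts to the RPC sequence sketched below (rpc_cmd in the test harness is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock). The NQN, serial number, listener address and sizes are the ones shown in this log.

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

  # TCP transport with zero-copy enabled; -c 0 sets the in-capsule data size to 0.
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

  # Subsystem cnode1: allow any host (-a), fixed serial number, at most 10 namespaces.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # 32 MiB malloc bdev with a 4096-byte block size, exported as namespace 1.
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf is then pointed at this subsystem through a generated JSON config (gen_nvmf_target_json, expanded a few lines below into a bdev_nvme_attach_controller call for Nvme1 at 10.0.0.2:4420). The long runs of "Requested NSID 1 already in use" / "Unable to add namespace" pairs further down appear to be this last nvmf_subsystem_add_ns RPC being re-issued repeatedly while the second bdevperf job runs; each attempt is expected to fail because NSID 1 is already occupied, exercising the subsystem pause/resume path (nvmf_rpc_ns_paused) under zero-copy I/O.
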
00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:39.394 02:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.394 "params": { 00:09:39.394 "name": "Nvme1", 00:09:39.394 "trtype": "tcp", 00:09:39.394 "traddr": "10.0.0.2", 00:09:39.394 "adrfam": "ipv4", 00:09:39.394 "trsvcid": "4420", 00:09:39.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.394 "hdgst": false, 00:09:39.394 "ddgst": false 00:09:39.394 }, 00:09:39.394 "method": "bdev_nvme_attach_controller" 00:09:39.394 }' 00:09:39.394 [2024-11-19 02:50:49.911841] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:39.394 [2024-11-19 02:50:49.911919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146300 ] 00:09:39.394 [2024-11-19 02:50:49.983803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.652 [2024-11-19 02:50:50.035753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.911 Running I/O for 10 seconds... 00:09:41.785 5818.00 IOPS, 45.45 MiB/s [2024-11-19T01:50:53.780Z] 5840.00 IOPS, 45.62 MiB/s [2024-11-19T01:50:54.737Z] 5848.00 IOPS, 45.69 MiB/s [2024-11-19T01:50:55.734Z] 5851.50 IOPS, 45.71 MiB/s [2024-11-19T01:50:56.744Z] 5846.80 IOPS, 45.68 MiB/s [2024-11-19T01:50:57.760Z] 5856.17 IOPS, 45.75 MiB/s [2024-11-19T01:50:58.768Z] 5848.14 IOPS, 45.69 MiB/s [2024-11-19T01:50:59.774Z] 5853.25 IOPS, 45.73 MiB/s [2024-11-19T01:51:00.785Z] 5856.33 IOPS, 45.75 MiB/s [2024-11-19T01:51:00.785Z] 5855.10 IOPS, 45.74 MiB/s 00:09:50.170 Latency(us) 00:09:50.170 [2024-11-19T01:51:00.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.170 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:50.170 Verification LBA range: start 0x0 length 0x1000 00:09:50.170 Nvme1n1 : 10.01 5857.44 45.76 0.00 0.00 21794.36 3786.52 29515.47 00:09:50.170 [2024-11-19T01:51:00.785Z] =================================================================================================================== 00:09:50.170 [2024-11-19T01:51:00.785Z] Total : 5857.44 45.76 0.00 0.00 21794.36 3786.52 29515.47 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=147684 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.170 { 00:09:50.170 "params": { 00:09:50.170 "name": 
"Nvme$subsystem", 00:09:50.170 "trtype": "$TEST_TRANSPORT", 00:09:50.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.170 "adrfam": "ipv4", 00:09:50.170 "trsvcid": "$NVMF_PORT", 00:09:50.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.170 "hdgst": ${hdgst:-false}, 00:09:50.170 "ddgst": ${ddgst:-false} 00:09:50.170 }, 00:09:50.170 "method": "bdev_nvme_attach_controller" 00:09:50.170 } 00:09:50.170 EOF 00:09:50.170 )") 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:50.170 [2024-11-19 02:51:00.610088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.170 [2024-11-19 02:51:00.610128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:50.170 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:50.171 02:51:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.171 "params": { 00:09:50.171 "name": "Nvme1", 00:09:50.171 "trtype": "tcp", 00:09:50.171 "traddr": "10.0.0.2", 00:09:50.171 "adrfam": "ipv4", 00:09:50.171 "trsvcid": "4420", 00:09:50.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.171 "hdgst": false, 00:09:50.171 "ddgst": false 00:09:50.171 }, 00:09:50.171 "method": "bdev_nvme_attach_controller" 00:09:50.171 }' 00:09:50.171 [2024-11-19 02:51:00.618045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.618068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.626073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.626094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.634094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.634114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.642130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.642149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.649604] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:50.171 [2024-11-19 02:51:00.649680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147684 ] 00:09:50.171 [2024-11-19 02:51:00.650136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.650156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.658162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.658183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.666178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.666197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.674198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.674218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.682222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.682241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.690275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.690295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.698278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.698298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.706319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.706340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.714327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.714349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.721373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.171 [2024-11-19 02:51:00.722348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.722385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.730401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.730439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.738410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.738443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.746410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.746431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:50.171 [2024-11-19 02:51:00.754431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.754451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.762451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.762471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.768239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.171 [2024-11-19 02:51:00.770473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.770492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.171 [2024-11-19 02:51:00.778516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.171 [2024-11-19 02:51:00.778537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.786571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.786616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.794628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.794703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.802592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.802624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.810614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.810652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.818637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.818697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.826661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.826720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.834653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.834694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.842743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.842778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.850769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.850818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.858768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.858803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 
02:51:00.866765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.866785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.874777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.874798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.883084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.883109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.891115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.891144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.899138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.899159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.907163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.907185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.915184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.915206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.923208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.923230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.931229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.931250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.939267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.939287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.947273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.947303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.955296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.955315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.963321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.963342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.971342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.971363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.979362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.979381] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.987385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.987404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:00.995406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:00.995425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:01.003430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:01.003449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:01.011455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:01.011475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:01.019540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:01.019567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:01.027503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:01.027526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 Running I/O for 5 seconds... 00:09:50.453 [2024-11-19 02:51:01.035522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:01.035544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:01.049880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:01.049910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.453 [2024-11-19 02:51:01.061188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.453 [2024-11-19 02:51:01.061218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.072593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.072623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.083462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.083492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.095015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.095044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.106275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.106303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.117436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.117464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.128107] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.128135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.139072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.139099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.150502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.150530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.161280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.161308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.174245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.174272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.184233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.184260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.195108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.195135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.206133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.206160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.216845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.216872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.229306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.229333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.239387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.239415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.250107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.250140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.262177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.262205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.271474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.271501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.282921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.282947] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.296732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.296759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.307056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.307083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.317767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.317795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.328622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.328649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.730 [2024-11-19 02:51:01.339417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.730 [2024-11-19 02:51:01.339445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.350736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.350765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.361749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.361777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.374647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.374674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.385311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.385339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.396258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.396285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.409017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.409068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.419105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.419131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.429776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.429803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.442211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.442237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.452686] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.452723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.463345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.463372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.473925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.473951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.484268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.484294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.494681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.494715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.505624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.505650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.517940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.517967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.528046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.528085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.539028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.539064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.552475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.552502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.562866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.562895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.573789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.573818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.584874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.584903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.595842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.595871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.607046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.607074] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.014 [2024-11-19 02:51:01.617914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.014 [2024-11-19 02:51:01.617946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.279 [2024-11-19 02:51:01.629000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.279 [2024-11-19 02:51:01.629028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.279 [2024-11-19 02:51:01.640899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.279 [2024-11-19 02:51:01.640928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.279 [2024-11-19 02:51:01.652120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.279 [2024-11-19 02:51:01.652148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.279 [2024-11-19 02:51:01.662924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.279 [2024-11-19 02:51:01.662952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.279 [2024-11-19 02:51:01.674039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.279 [2024-11-19 02:51:01.674066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.279 [2024-11-19 02:51:01.686567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.279 [2024-11-19 02:51:01.686594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.696815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.696842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.707177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.707204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.717813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.717840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.728295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.728322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.738767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.738794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.749653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.749681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.760244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.760270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.771046] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.771072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.783461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.783488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.794085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.794112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.804425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.804452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.814956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.814983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.825146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.825172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.835735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.835763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.846292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.846318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.857084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.857111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.869508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.869535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.879577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.879610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.280 [2024-11-19 02:51:01.890338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.280 [2024-11-19 02:51:01.890365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.901165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.901195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.912522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.912550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.923458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.923485] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.934741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.934768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.945523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.945550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.956572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.956600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.967604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.967631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.978873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.978914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:01.989977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:01.990003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:02.000897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.000924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:02.013728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.013755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:02.024040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.024067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:02.034715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.034742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 11613.00 IOPS, 90.73 MiB/s [2024-11-19T01:51:02.162Z] [2024-11-19 02:51:02.045344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.045370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:02.056117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.056145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:02.067196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.067223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 02:51:02.079937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.079963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.547 [2024-11-19 
02:51:02.090086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.547 [2024-11-19 02:51:02.090122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.548 [2024-11-19 02:51:02.100857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.548 [2024-11-19 02:51:02.100884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.548 [2024-11-19 02:51:02.111025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.548 [2024-11-19 02:51:02.111051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.548 [2024-11-19 02:51:02.121385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.548 [2024-11-19 02:51:02.121411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.548 [2024-11-19 02:51:02.132833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.548 [2024-11-19 02:51:02.132860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.548 [2024-11-19 02:51:02.145720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.548 [2024-11-19 02:51:02.145746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.548 [2024-11-19 02:51:02.155998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.548 [2024-11-19 02:51:02.156025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.167391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.167422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.178064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.178091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.188955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.188998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.201737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.201764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.211658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.211684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.222339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.222365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.233178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.233205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.243914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.243941] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.254299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.254325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.265097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.265124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.277634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.277661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.287474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.287500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.298196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.298231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.308768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.308795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.319588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.319615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.330059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.330086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.341044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.341071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.351837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.351864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.364510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.364537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.374569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.374596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.385176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.385203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.395791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.395818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.406809] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.406836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.419488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.419515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.818 [2024-11-19 02:51:02.429585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.818 [2024-11-19 02:51:02.429613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.440663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.440703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.451216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.451242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.462372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.462399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.475130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.475157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.485547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.485574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.496311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.496338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.509911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.509938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.520365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.520392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.531132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.531158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.541867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.541893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.552679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.552715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.563413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.563439] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.574185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.574211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.588483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.588510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.599184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.599210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.610103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.610130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.622977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.623004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.123 [2024-11-19 02:51:02.632364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.123 [2024-11-19 02:51:02.632392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.124 [2024-11-19 02:51:02.643836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.124 [2024-11-19 02:51:02.643864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.124 [2024-11-19 02:51:02.654774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.124 [2024-11-19 02:51:02.654802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.124 [2024-11-19 02:51:02.665568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.124 [2024-11-19 02:51:02.665595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.124 [2024-11-19 02:51:02.676323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.124 [2024-11-19 02:51:02.676351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.124 [2024-11-19 02:51:02.687170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.124 [2024-11-19 02:51:02.687214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.124 [2024-11-19 02:51:02.699782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.124 [2024-11-19 02:51:02.699811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.124 [2024-11-19 02:51:02.709325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.124 [2024-11-19 02:51:02.709354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.720647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.720702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.731478] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.731507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.742358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.742385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.755445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.755473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.766172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.766201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.777255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.777282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.789962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.789989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.800082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.800108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.810935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.810961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.823588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.823615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.833937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.833964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.844876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.844904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.857666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.857701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.867737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.867763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.878039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.878065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.888805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.888831] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.901106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.901132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.911226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.911252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.921424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.921451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.932014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.932041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.942226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.942254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.953254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.953281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.966588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.966616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.976837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.976864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:02.987868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:02.987896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:03.000650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:03.000677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:03.012637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.414 [2024-11-19 02:51:03.012664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.414 [2024-11-19 02:51:03.021732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.415 [2024-11-19 02:51:03.021775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.033957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.033987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 11711.50 IOPS, 91.50 MiB/s [2024-11-19T01:51:03.305Z] [2024-11-19 02:51:03.044898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.044925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 
02:51:03.055951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.055978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.068608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.068634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.078771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.078798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.090054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.090082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.102724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.102764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.112833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.112859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.124213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.124241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.137115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.137150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.147268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.147294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.158105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.158131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.170672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.170707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.180736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.180763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.191806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.191834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.204144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.204172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.214435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.214461] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.225238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.225265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.237170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.237197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.246067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.246093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.257901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.257927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.269054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.269081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.279715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.279741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.690 [2024-11-19 02:51:03.293227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.690 [2024-11-19 02:51:03.293254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.303973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.304002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.314961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.314990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.325679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.325716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.336259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.336287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.346878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.346913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.357573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.357600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.370113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.370140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.380478] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.380504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.391300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.391328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.404008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.404035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.414021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.414047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.425132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.425160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.437429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.437456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.446766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.446800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.458765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.458793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.471235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.471262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.971 [2024-11-19 02:51:03.481594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.971 [2024-11-19 02:51:03.481621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.492006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.492033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.502617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.502643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.513217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.513244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.523994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.524020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.536944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.536971] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.546910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.546936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.557197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.557230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.567825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.567852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.972 [2024-11-19 02:51:03.578566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.972 [2024-11-19 02:51:03.578595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.588899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.588927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.600000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.600028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.611230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.611258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.622447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.622473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.633178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.633204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.643972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.643998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.654670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.654707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.665230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.665256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.675798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.675825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.686686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.686723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.699615] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.699642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.709943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.709970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.720947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.720974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.733271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.733298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.742680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.742714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.756017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.756044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.766535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.766571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.776974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.777002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.787583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.787610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.798285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.798312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.808650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.808677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.819246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.819273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.830036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.830063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.840482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.840510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.245 [2024-11-19 02:51:03.853475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.245 [2024-11-19 02:51:03.853504] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.863958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.863985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.874836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.874864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.887516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.887543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.897906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.897933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.908628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.908655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.921544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.921571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.931757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.931783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.942962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.943004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.955137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.955164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.964852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.964878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.975475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.975501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.986630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.986658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:03.997651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:03.997701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.011067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.011095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.021939] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.021966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.033093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.033119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.043731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.043758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 11753.33 IOPS, 91.82 MiB/s [2024-11-19T01:51:04.140Z] [2024-11-19 02:51:04.054296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.054323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.068227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.068254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.078703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.078736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.088879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.088906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.099630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.099657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.112231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.112257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-19 02:51:04.122432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-19 02:51:04.122458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.797 [2024-11-19 02:51:04.133451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.133482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.146560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.146588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.157178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.157205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.167722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.167749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.178465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:53.798 [2024-11-19 02:51:04.178492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.191678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.191714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.201628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.201654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.212225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.212252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.222810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.222837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.233538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.233565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.244315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.244342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.255078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.255105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.265831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.265858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.278223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.278249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.288369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.288395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.299196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.299223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.311802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.311829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.323374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.323400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.332583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.332609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.344465] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.344493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.355285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.355311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.365518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.365545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.376388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.376415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.389177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.389215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.798 [2024-11-19 02:51:04.399327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.798 [2024-11-19 02:51:04.399358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.410730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.410759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.422229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.422258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.433553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.433580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.444254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.444281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.456788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.456815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.466927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.466954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.477560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.477587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.487867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.487895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.498748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.498775] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.509523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.509551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.522029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.522056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.532003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.532030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.542426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.542454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.073 [2024-11-19 02:51:04.552770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.073 [2024-11-19 02:51:04.552797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.563486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.563514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.576097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.576125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.586232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.586259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.596737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.596774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.606943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.606971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.617532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.617559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.630089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.630117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.640375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.640402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.651093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.651120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.663612] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.663640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.074 [2024-11-19 02:51:04.673788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.074 [2024-11-19 02:51:04.673815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.368 [2024-11-19 02:51:04.684556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.368 [2024-11-19 02:51:04.684585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.368 [2024-11-19 02:51:04.694982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.695016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.706060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.706088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.718393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.718420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.728339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.728366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.738635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.738662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.749328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.749355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.761539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.761566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.771611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.771638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.782041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.782068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.792671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.792706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.802928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.802961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.813477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.813503] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.824088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.824115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.836774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.836801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.847079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.847106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.857477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.857503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.867766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.867793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.878416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.878459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.889031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.889062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.899657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.899684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.910597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.910625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.921468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.921495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.935981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.936009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.946494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.946522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.369 [2024-11-19 02:51:04.957595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.369 [2024-11-19 02:51:04.957624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:04.968741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:04.968771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:04.979505] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:04.979542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:04.990244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:04.990272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.001461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.001490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.014625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.014662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.024771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.024798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.035886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.035913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 11789.25 IOPS, 92.10 MiB/s [2024-11-19T01:51:05.267Z] [2024-11-19 02:51:05.048931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.048958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.059447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.059476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.070381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.070408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.082883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.082912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.092214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.092242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.105118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.105145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.115063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.115090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.125504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.125532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.135976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:54.652 [2024-11-19 02:51:05.136003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.146830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.146857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.652 [2024-11-19 02:51:05.157514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.652 [2024-11-19 02:51:05.157541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.169889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.169916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.179999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.180026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.192132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.192160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.202116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.202143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.212297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.212324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.222771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.222798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.233252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.233279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.244015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.244042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.254412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.254439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.653 [2024-11-19 02:51:05.264852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.653 [2024-11-19 02:51:05.264879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.911 [2024-11-19 02:51:05.275509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.275537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.285998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.286026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.298630] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.298658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.308603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.308631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.319404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.319432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.331996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.332024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.342048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.342076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.352712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.352739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.363220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.363248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.373972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.374014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.384594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.384622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.395408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.395436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.405880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.405907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.416358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.416385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.426869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.426896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.437777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.437805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.448400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.448427] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.459275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.459302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.470194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.470222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.481654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.481681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.493825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.493853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.504183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.504211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.514527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.514554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.912 [2024-11-19 02:51:05.525333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.912 [2024-11-19 02:51:05.525362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.170 [2024-11-19 02:51:05.537850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.170 [2024-11-19 02:51:05.537878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.170 [2024-11-19 02:51:05.548146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.170 [2024-11-19 02:51:05.548174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.170 [2024-11-19 02:51:05.558665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.170 [2024-11-19 02:51:05.558702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.170 [2024-11-19 02:51:05.568980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.170 [2024-11-19 02:51:05.569007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.170 [2024-11-19 02:51:05.579337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.170 [2024-11-19 02:51:05.579364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.170 [2024-11-19 02:51:05.590003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.170 [2024-11-19 02:51:05.590031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.602436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.602463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.611785] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.611812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.622912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.622940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.633754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.633781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.644827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.644854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.657036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.657064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.666900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.666928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.677612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.677640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.690132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.690159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.700102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.700130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.711087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.711114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.723470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.723498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.733080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.733107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.745766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.745793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.755901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.755929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.766145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.766172] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.776338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.776366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.171 [2024-11-19 02:51:05.786907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.171 [2024-11-19 02:51:05.786934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.797289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.797317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.807554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.807582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.818206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.818233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.830537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.830572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.840791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.840819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.851236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.851264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.864043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.864073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.874509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.874537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.885046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.885073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.897929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.897957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.907627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.907654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.918125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.918152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.928854] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.928881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.939704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.939742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.950656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.950683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.963092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.963120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.972896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.972922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.983343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.983370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:05.993877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:05.993905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:06.004765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:06.004793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:06.015079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:06.015108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:06.026086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:06.026114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:06.037039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:06.037073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.430 [2024-11-19 02:51:06.047814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.430 [2024-11-19 02:51:06.047842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 11830.40 IOPS, 92.42 MiB/s [2024-11-19T01:51:06.305Z] [2024-11-19 02:51:06.055993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.056020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 00:09:55.690 Latency(us) 00:09:55.690 [2024-11-19T01:51:06.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.690 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:55.690 Nvme1n1 : 5.01 11833.33 92.45 0.00 0.00 10803.77 4781.70 21068.61 00:09:55.690 [2024-11-19T01:51:06.305Z] 
=================================================================================================================== 00:09:55.690 [2024-11-19T01:51:06.305Z] Total : 11833.33 92.45 0.00 0.00 10803.77 4781.70 21068.61 00:09:55.690 [2024-11-19 02:51:06.062519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.062542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.070538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.070562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.078609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.078653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.086641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.086719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.094660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.094717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.102680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.102735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.110704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.110751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.118737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.118805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.126758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.126804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.134777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.134824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.142806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.142856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.150831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.150878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.158832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.158879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.166858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.166915] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.174883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.174932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.182886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.182930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.190884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.690 [2024-11-19 02:51:06.190910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.690 [2024-11-19 02:51:06.198910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.691 [2024-11-19 02:51:06.198942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.691 [2024-11-19 02:51:06.206964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.691 [2024-11-19 02:51:06.207015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.691 [2024-11-19 02:51:06.214982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.691 [2024-11-19 02:51:06.215029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.691 [2024-11-19 02:51:06.222960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.691 [2024-11-19 02:51:06.222997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.691 [2024-11-19 02:51:06.230991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.691 [2024-11-19 02:51:06.231011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.691 [2024-11-19 02:51:06.239012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.691 [2024-11-19 02:51:06.239047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (147684) - No such process 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 147684 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.691 delay0 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.691 02:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:55.949 [2024-11-19 02:51:06.363498] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:02.513 Initializing NVMe Controllers 00:10:02.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:02.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:02.513 Initialization complete. Launching workers. 00:10:02.513 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2641 00:10:02.513 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2928, failed to submit 33 00:10:02.513 success 2739, unsuccessful 189, failed 0 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.513 rmmod nvme_tcp 00:10:02.513 rmmod nvme_fabrics 00:10:02.513 rmmod nvme_keyring 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 146268 ']' 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 146268 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 146268 ']' 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 146268 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146268 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 
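The trace above is the tail end of zcopy.sh: the long train of "Requested NSID 1 already in use" / "Unable to add namespace" errors is expected output from the namespace-churn loop the test runs alongside the I/O job, and the "kill: (147684) - No such process" at line 42 only means that loop had already exited when the script tried to reap it. The script then removes NSID 1, wraps malloc0 in a delay bdev named delay0 with 1,000,000 µs (1 s) average and p99 latencies for both reads and writes, re-attaches it as NSID 1, and drives it for 5 seconds with the abort example so that queued I/O has to be aborted in flight. Issued by hand, roughly the same sequence would look like the sketch below; rpc_cmd in the trace forwards its arguments to scripts/rpc.py, and the default /var/tmp/spdk.sock RPC socket is assumed.

  # Hand-run equivalent of the traced rpc_cmd/abort steps (sketch, defaults assumed)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read and write latency, microseconds
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With every I/O delayed by about a second, most of the 64-deep queue never completes inside the 5-second window, which is what the counters above reflect: each of the 2,961 I/Os issued had a matching abort attempted (2,928 submitted plus 33 that failed to submit), with 2,739 aborts successful, 189 unsuccessful and 0 failed.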
00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146268' 00:10:02.513 killing process with pid 146268 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 146268 00:10:02.513 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 146268 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.513 02:51:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.054 00:10:05.054 real 0m28.154s 00:10:05.054 user 0m42.109s 00:10:05.054 sys 0m7.801s 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.054 ************************************ 00:10:05.054 END TEST nvmf_zcopy 00:10:05.054 ************************************ 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.054 ************************************ 00:10:05.054 START TEST nvmf_nmic 00:10:05.054 ************************************ 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:05.054 * Looking for test storage... 
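nvmftestfini, traced above, is the standard teardown between these target tests: it unloads the kernel NVMe initiator modules, kills the nvmf_tgt reactor that served the zcopy run, strips only the SPDK-added firewall rules, tears down the SPDK-created network namespace (the helper's exact commands are hidden behind xtrace_disable_per_cmd, so that line in the sketch is an assumption), and flushes the second test interface before the timing summary and "END TEST nvmf_zcopy" are printed. Condensed into plain commands it amounts to roughly the following; the pid, namespace and interface names are the ones from this particular run.

  # Condensed view of the traced nvmftestfini steps (sketch; values taken from this run)
  sync
  modprobe -v -r nvme-tcp nvme-fabrics                    # also drags out nvme_keyring, per the rmmod lines above
  kill 146268 && wait 146268                              # nvmf_tgt reactor_1 started for the zcopy test
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the SPDK_NVMF rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumed equivalent of remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1                                # clear the second test interface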
00:10:05.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.054 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.055 --rc genhtml_branch_coverage=1 00:10:05.055 --rc genhtml_function_coverage=1 00:10:05.055 --rc genhtml_legend=1 00:10:05.055 --rc geninfo_all_blocks=1 00:10:05.055 --rc geninfo_unexecuted_blocks=1 00:10:05.055 00:10:05.055 ' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.055 --rc genhtml_branch_coverage=1 00:10:05.055 --rc genhtml_function_coverage=1 00:10:05.055 --rc genhtml_legend=1 00:10:05.055 --rc geninfo_all_blocks=1 00:10:05.055 --rc geninfo_unexecuted_blocks=1 00:10:05.055 00:10:05.055 ' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.055 --rc genhtml_branch_coverage=1 00:10:05.055 --rc genhtml_function_coverage=1 00:10:05.055 --rc genhtml_legend=1 00:10:05.055 --rc geninfo_all_blocks=1 00:10:05.055 --rc geninfo_unexecuted_blocks=1 00:10:05.055 00:10:05.055 ' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.055 --rc genhtml_branch_coverage=1 00:10:05.055 --rc genhtml_function_coverage=1 00:10:05.055 --rc genhtml_legend=1 00:10:05.055 --rc geninfo_all_blocks=1 00:10:05.055 --rc geninfo_unexecuted_blocks=1 00:10:05.055 00:10:05.055 ' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
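Before nmic.sh gets to the NVMe-oF setup it sources scripts/common.sh and test/nvmf/common.sh; the block above is the coverage bookkeeping, where the installed lcov version (taken from lcov --version | awk '{print $NF}') is compared against 2 with the script's cmp_versions helper, and because 1.15 < 2 the legacy flag set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1, plus the genhtml/geninfo variants visible in the exported LCOV_OPTS) is kept. The same decision can be expressed with sort -V; the sketch below is only an illustration of the check, not how scripts/common.sh implements it.

  # Version check behind the LCOV_OPTS export above, rewritten with sort -V (illustration only)
  ver="$(lcov --version | awk '{print $NF}')"          # e.g. "1.15"
  if [ "$(printf '%s\n' "$ver" 2 | sort -V | head -n1)" != "2" ]; then
      # lcov < 2: keep the pre-2.0 branch/function coverage rc flags
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi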
00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:05.055 
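The "common.sh: line 33: [: : integer expression expected" complaint traced just above comes from '[' '' -eq 1 ']': the variable being tested is empty in this environment, and test's -eq operator only accepts integers, so it prints the message and returns a non-zero status; the script simply takes the other branch, so the message is noise rather than a failure of the run. A minimal reproduction, plus the usual way such a check is kept quiet, is sketched below (purely illustrative; it is not a change applied to common.sh).

  # Reproducing the "[: : integer expression expected" message seen above (illustration)
  x=''
  [ "$x" -eq 1 ]        # prints "[: : integer expression expected", exit status 2
  [ "${x:-0}" -eq 1 ]   # defaulting the empty value to 0 keeps it quiet, exit status 1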
02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.055 02:51:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.960 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.960 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.960 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.960 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.960 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.219 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.219 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.219 02:51:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.219 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
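The interface names reported above (cvl_0_0, cvl_0_1) are resolved from the whitelisted PCI functions through sysfs and then stripped down to the bare netdev name. A condensed sketch of that lookup, with the PCI address and result taken from the log:

pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"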
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:10:07.220 00:10:07.220 --- 10.0.0.2 ping statistics --- 00:10:07.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.220 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:10:07.220 00:10:07.220 --- 10.0.0.1 ping statistics --- 00:10:07.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.220 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=151693 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 151693 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 151693 ']' 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.220 02:51:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.220 [2024-11-19 02:51:17.832944] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
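Condensed, the network preparation traced for this test splits the two NIC ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a private namespace (target, 10.0.0.2 on cvl_0_0), opens the NVMe/TCP port in the firewall, and verifies reachability in both directions. A sketch of the sequence, with names and addresses as they appear in the log (the iptables comment is shortened here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                     # tagged so teardown can strip only these rules
ping -c 1 10.0.0.2                                         # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator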
00:10:07.220 [2024-11-19 02:51:17.833032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.479 [2024-11-19 02:51:17.912216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.479 [2024-11-19 02:51:17.961766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.479 [2024-11-19 02:51:17.961830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.479 [2024-11-19 02:51:17.961844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.479 [2024-11-19 02:51:17.961855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.479 [2024-11-19 02:51:17.961865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.479 [2024-11-19 02:51:17.963408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.479 [2024-11-19 02:51:17.963468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.479 [2024-11-19 02:51:17.963535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.479 [2024-11-19 02:51:17.963537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.479 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.479 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:07.479 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.479 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.479 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.738 [2024-11-19 02:51:18.119538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.738 Malloc0 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.738 [2024-11-19 02:51:18.189633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.738 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:07.738 test case1: single bdev can't be used in multiple subsystems 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.739 [2024-11-19 02:51:18.213467] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:07.739 [2024-11-19 02:51:18.213498] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:07.739 [2024-11-19 02:51:18.213512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.739 request: 00:10:07.739 { 00:10:07.739 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:07.739 "namespace": { 00:10:07.739 "bdev_name": "Malloc0", 00:10:07.739 "no_auto_visible": false 
00:10:07.739 }, 00:10:07.739 "method": "nvmf_subsystem_add_ns", 00:10:07.739 "req_id": 1 00:10:07.739 } 00:10:07.739 Got JSON-RPC error response 00:10:07.739 response: 00:10:07.739 { 00:10:07.739 "code": -32602, 00:10:07.739 "message": "Invalid parameters" 00:10:07.739 } 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:07.739 Adding namespace failed - expected result. 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:07.739 test case2: host connect to nvmf target in multiple paths 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.739 [2024-11-19 02:51:18.221580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.739 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.304 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:08.870 02:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.870 02:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:08.870 02:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.870 02:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:08.870 02:51:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:11.397 02:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:11.397 02:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:11.397 02:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.397 02:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:11.397 02:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.397 02:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:11.397 02:51:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
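What "test case1" above demonstrates is the exclusive_write claim a subsystem takes on its namespace bdev: once Malloc0 backs a namespace of cnode1, adding it to cnode2 is rejected and the RPC returns -32602 (Invalid parameters), which is the expected result. A rough rpc.py equivalent of the rpc_cmd sequence traced above (the socket defaults to /var/tmp/spdk.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed by cnode1

Test case2 then attaches the host to cnode1 over both listeners (4420 and 4421), which is why the later disconnect reports 2 controller(s).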
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:11.397 [global] 00:10:11.397 thread=1 00:10:11.397 invalidate=1 00:10:11.397 rw=write 00:10:11.397 time_based=1 00:10:11.397 runtime=1 00:10:11.397 ioengine=libaio 00:10:11.397 direct=1 00:10:11.397 bs=4096 00:10:11.397 iodepth=1 00:10:11.397 norandommap=0 00:10:11.397 numjobs=1 00:10:11.397 00:10:11.397 verify_dump=1 00:10:11.397 verify_backlog=512 00:10:11.397 verify_state_save=0 00:10:11.397 do_verify=1 00:10:11.397 verify=crc32c-intel 00:10:11.397 [job0] 00:10:11.397 filename=/dev/nvme0n1 00:10:11.397 Could not set queue depth (nvme0n1) 00:10:11.397 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.397 fio-3.35 00:10:11.397 Starting 1 thread 00:10:12.770 00:10:12.770 job0: (groupid=0, jobs=1): err= 0: pid=152209: Tue Nov 19 02:51:23 2024 00:10:12.770 read: IOPS=1999, BW=7996KiB/s (8188kB/s)(8324KiB/1041msec) 00:10:12.770 slat (nsec): min=4492, max=54767, avg=9804.59, stdev=5819.99 00:10:12.770 clat (usec): min=169, max=40944, avg=266.49, stdev=1542.08 00:10:12.770 lat (usec): min=175, max=40978, avg=276.29, stdev=1542.92 00:10:12.770 clat percentiles (usec): 00:10:12.770 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:10:12.770 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:10:12.770 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 243], 00:10:12.770 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[40633], 99.95th=[40633], 00:10:12.770 | 99.99th=[41157] 00:10:12.770 write: IOPS=2459, BW=9837KiB/s (10.1MB/s)(10.0MiB/1041msec); 0 zone resets 00:10:12.770 slat (usec): min=6, max=28628, avg=24.23, stdev=565.59 00:10:12.770 clat (usec): min=123, max=332, avg=151.54, stdev=20.43 00:10:12.770 lat (usec): min=130, max=28825, avg=175.77, stdev=566.91 00:10:12.770 clat percentiles (usec): 00:10:12.770 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 139], 00:10:12.770 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:10:12.770 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 186], 00:10:12.770 | 99.00th=[ 243], 99.50th=[ 273], 99.90th=[ 322], 99.95th=[ 322], 00:10:12.770 | 99.99th=[ 334] 00:10:12.770 bw ( KiB/s): min= 9864, max=10616, per=100.00%, avg=10240.00, stdev=531.74, samples=2 00:10:12.770 iops : min= 2466, max= 2654, avg=2560.00, stdev=132.94, samples=2 00:10:12.770 lat (usec) : 250=97.54%, 500=2.39% 00:10:12.770 lat (msec) : 50=0.06% 00:10:12.770 cpu : usr=2.79%, sys=5.38%, ctx=4645, majf=0, minf=1 00:10:12.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.770 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.770 00:10:12.770 Run status group 0 (all jobs): 00:10:12.770 READ: bw=7996KiB/s (8188kB/s), 7996KiB/s-7996KiB/s (8188kB/s-8188kB/s), io=8324KiB (8524kB), run=1041-1041msec 00:10:12.770 WRITE: bw=9837KiB/s (10.1MB/s), 9837KiB/s-9837KiB/s (10.1MB/s-10.1MB/s), io=10.0MiB (10.5MB), run=1041-1041msec 00:10:12.770 00:10:12.770 Disk stats (read/write): 00:10:12.770 nvme0n1: ios=2074/2490, merge=0/0, ticks=1398/362, in_queue=1760, util=98.60% 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
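For reference, the fio-wrapper invocation above renders the [global]/[job0] file that is printed and hands it to fio against the multipath device /dev/nvme0n1. Roughly the same workload can be expressed as a single command line (a sketch; option names follow the printed job file rather than the wrapper's -p/-i/-d/-t/-r/-v flags):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based=1 --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0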
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.771 rmmod nvme_tcp 00:10:12.771 rmmod nvme_fabrics 00:10:12.771 rmmod nvme_keyring 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 151693 ']' 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 151693 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 151693 ']' 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 151693 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 151693 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 151693' 00:10:12.771 killing process with pid 151693 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 151693 00:10:12.771 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 151693 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.031 02:51:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.571 00:10:15.571 real 0m10.384s 00:10:15.571 user 0m23.299s 00:10:15.571 sys 0m2.861s 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.571 ************************************ 00:10:15.571 END TEST nvmf_nmic 00:10:15.571 ************************************ 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.571 ************************************ 00:10:15.571 START TEST nvmf_fio_target 00:10:15.571 ************************************ 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.571 * Looking for test storage... 
00:10:15.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.571 --rc genhtml_branch_coverage=1 00:10:15.571 --rc genhtml_function_coverage=1 00:10:15.571 --rc genhtml_legend=1 00:10:15.571 --rc geninfo_all_blocks=1 00:10:15.571 --rc geninfo_unexecuted_blocks=1 00:10:15.571 00:10:15.571 ' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.571 --rc genhtml_branch_coverage=1 00:10:15.571 --rc genhtml_function_coverage=1 00:10:15.571 --rc genhtml_legend=1 00:10:15.571 --rc geninfo_all_blocks=1 00:10:15.571 --rc geninfo_unexecuted_blocks=1 00:10:15.571 00:10:15.571 ' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.571 --rc genhtml_branch_coverage=1 00:10:15.571 --rc genhtml_function_coverage=1 00:10:15.571 --rc genhtml_legend=1 00:10:15.571 --rc geninfo_all_blocks=1 00:10:15.571 --rc geninfo_unexecuted_blocks=1 00:10:15.571 00:10:15.571 ' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.571 --rc genhtml_branch_coverage=1 00:10:15.571 --rc genhtml_function_coverage=1 00:10:15.571 --rc genhtml_legend=1 00:10:15.571 --rc geninfo_all_blocks=1 00:10:15.571 --rc geninfo_unexecuted_blocks=1 00:10:15.571 00:10:15.571 ' 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.571 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.572 02:51:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.572 02:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:17.473 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.474 02:51:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:17.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:17.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.474 02:51:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:17.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:17.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.474 02:51:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.474 02:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.474 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.474 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.474 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.474 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.474 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.733 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.733 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.733 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:10:17.734 00:10:17.734 --- 10.0.0.2 ping statistics --- 00:10:17.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.734 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:10:17.734 00:10:17.734 --- 10.0.0.1 ping statistics --- 00:10:17.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.734 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=154419 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 154419 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 154419 ']' 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.734 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.734 [2024-11-19 02:51:28.188547] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
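The nvmf_tcp_init block above builds the two-port test bed used for the rest of this run: the target-side port (cvl_0_0, found under 0000:0a:00.0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1; an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction confirms the path before nvmf_tgt is started inside the namespace. Condensed, and with the interface names specific to this host, the sequence is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1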
00:10:17.734 [2024-11-19 02:51:28.188644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.734 [2024-11-19 02:51:28.259351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.734 [2024-11-19 02:51:28.303437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.734 [2024-11-19 02:51:28.303496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.734 [2024-11-19 02:51:28.303520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.734 [2024-11-19 02:51:28.303531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.734 [2024-11-19 02:51:28.303541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.734 [2024-11-19 02:51:28.305108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.734 [2024-11-19 02:51:28.305174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.734 [2024-11-19 02:51:28.305237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.734 [2024-11-19 02:51:28.305240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.993 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.993 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:17.993 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.993 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.993 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.993 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.993 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:18.251 [2024-11-19 02:51:28.753810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.251 02:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.509 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:18.509 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.769 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:18.769 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.334 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:19.334 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.334 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:19.335 02:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:19.901 02:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.159 02:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:20.159 02:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.417 02:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:20.417 02:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.675 02:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:20.675 02:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:20.932 02:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.190 02:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:21.190 02:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.448 02:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:21.448 02:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:21.705 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.964 [2024-11-19 02:51:32.447591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.964 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:22.221 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:22.479 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.045 02:51:33 
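Everything from nvmf_create_transport up to the nvme connect above is target/fio.sh provisioning the target through scripts/rpc.py against the nvmf_tgt running inside the namespace: a TCP transport (with the options shown), standalone 64 MiB malloc bdevs with 512-byte blocks, a RAID0 over Malloc2/Malloc3 and a concat array over Malloc4-Malloc6, one subsystem carrying all of them as namespaces, and a TCP listener on 10.0.0.2:4420; the initiator side then connects with nvme-cli (the actual run also passes --hostnqn/--hostid). With the long paths shortened, the sequence is roughly:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512        # repeated once per malloc bdev (Malloc0 .. Malloc6)
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The waitforserial step that follows just polls lsblk until four block devices carrying the serial SPDKISFASTANDAWESOME appear, which is why the fio jobs below can address /dev/nvme0n1 through /dev/nvme0n4 directly.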
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:23.045 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:23.045 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.045 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:23.045 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:23.045 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:25.572 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:25.572 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:25.572 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.572 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:25.573 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.573 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:25.573 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:25.573 [global] 00:10:25.573 thread=1 00:10:25.573 invalidate=1 00:10:25.573 rw=write 00:10:25.573 time_based=1 00:10:25.573 runtime=1 00:10:25.573 ioengine=libaio 00:10:25.573 direct=1 00:10:25.573 bs=4096 00:10:25.573 iodepth=1 00:10:25.573 norandommap=0 00:10:25.573 numjobs=1 00:10:25.573 00:10:25.573 verify_dump=1 00:10:25.573 verify_backlog=512 00:10:25.573 verify_state_save=0 00:10:25.573 do_verify=1 00:10:25.573 verify=crc32c-intel 00:10:25.573 [job0] 00:10:25.573 filename=/dev/nvme0n1 00:10:25.573 [job1] 00:10:25.573 filename=/dev/nvme0n2 00:10:25.573 [job2] 00:10:25.573 filename=/dev/nvme0n3 00:10:25.573 [job3] 00:10:25.573 filename=/dev/nvme0n4 00:10:25.573 Could not set queue depth (nvme0n1) 00:10:25.573 Could not set queue depth (nvme0n2) 00:10:25.573 Could not set queue depth (nvme0n3) 00:10:25.573 Could not set queue depth (nvme0n4) 00:10:25.573 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.573 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.573 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.573 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.573 fio-3.35 00:10:25.573 Starting 4 threads 00:10:26.505 00:10:26.505 job0: (groupid=0, jobs=1): err= 0: pid=155504: Tue Nov 19 02:51:37 2024 00:10:26.505 read: IOPS=1007, BW=4031KiB/s (4128kB/s)(4124KiB/1023msec) 00:10:26.505 slat (nsec): min=5658, max=45181, avg=12011.81, stdev=6309.28 00:10:26.505 clat (usec): min=199, max=41172, avg=641.51, stdev=3568.50 00:10:26.505 lat (usec): min=205, max=41181, avg=653.52, stdev=3568.82 00:10:26.505 clat percentiles (usec): 00:10:26.505 | 1.00th=[ 225], 5.00th=[ 241], 10.00th=[ 253], 20.00th=[ 277], 
00:10:26.505 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:10:26.505 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 408], 95.00th=[ 478], 00:10:26.505 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:26.505 | 99.99th=[41157] 00:10:26.505 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:10:26.505 slat (nsec): min=7107, max=50557, avg=14885.41, stdev=7484.98 00:10:26.505 clat (usec): min=138, max=408, avg=205.92, stdev=33.65 00:10:26.505 lat (usec): min=147, max=436, avg=220.80, stdev=32.67 00:10:26.505 clat percentiles (usec): 00:10:26.505 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:10:26.505 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 212], 00:10:26.505 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 258], 00:10:26.505 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 408], 99.95th=[ 408], 00:10:26.505 | 99.99th=[ 408] 00:10:26.505 bw ( KiB/s): min= 4096, max= 8192, per=22.82%, avg=6144.00, stdev=2896.31, samples=2 00:10:26.505 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:26.505 lat (usec) : 250=59.37%, 500=39.35%, 750=0.97% 00:10:26.505 lat (msec) : 50=0.31% 00:10:26.505 cpu : usr=2.74%, sys=4.40%, ctx=2567, majf=0, minf=1 00:10:26.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.505 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.505 job1: (groupid=0, jobs=1): err= 0: pid=155505: Tue Nov 19 02:51:37 2024 00:10:26.505 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:26.505 slat (nsec): min=4460, max=63422, avg=11874.42, stdev=8297.76 00:10:26.505 clat (usec): min=175, max=611, avg=261.20, stdev=80.77 00:10:26.505 lat (usec): min=180, max=643, avg=273.07, stdev=85.36 00:10:26.505 clat percentiles (usec): 00:10:26.505 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:10:26.505 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 265], 00:10:26.505 | 70.00th=[ 289], 80.00th=[ 334], 90.00th=[ 379], 95.00th=[ 420], 00:10:26.505 | 99.00th=[ 502], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 603], 00:10:26.505 | 99.99th=[ 611] 00:10:26.505 write: IOPS=2276, BW=9107KiB/s (9325kB/s)(9116KiB/1001msec); 0 zone resets 00:10:26.505 slat (nsec): min=5678, max=44857, avg=11170.43, stdev=5080.88 00:10:26.505 clat (usec): min=124, max=1175, avg=175.90, stdev=44.73 00:10:26.505 lat (usec): min=131, max=1182, avg=187.07, stdev=45.33 00:10:26.505 clat percentiles (usec): 00:10:26.505 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:10:26.505 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 178], 00:10:26.505 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 233], 00:10:26.505 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 750], 99.95th=[ 955], 00:10:26.505 | 99.99th=[ 1172] 00:10:26.505 bw ( KiB/s): min= 8192, max= 8192, per=30.42%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.505 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.505 lat (usec) : 250=79.25%, 500=20.18%, 750=0.51%, 1000=0.05% 00:10:26.505 lat (msec) : 2=0.02% 00:10:26.505 cpu : usr=2.40%, sys=5.60%, ctx=4327, majf=0, minf=1 00:10:26.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:26.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.506 issued rwts: total=2048,2279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.506 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.506 job2: (groupid=0, jobs=1): err= 0: pid=155506: Tue Nov 19 02:51:37 2024 00:10:26.506 read: IOPS=1267, BW=5071KiB/s (5193kB/s)(5076KiB/1001msec) 00:10:26.506 slat (nsec): min=6079, max=43072, avg=13919.07, stdev=6632.27 00:10:26.506 clat (usec): min=192, max=41327, avg=509.61, stdev=3017.44 00:10:26.506 lat (usec): min=198, max=41346, avg=523.53, stdev=3017.55 00:10:26.506 clat percentiles (usec): 00:10:26.506 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 231], 00:10:26.506 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:10:26.506 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 412], 95.00th=[ 453], 00:10:26.506 | 99.00th=[ 562], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:26.506 | 99.99th=[41157] 00:10:26.506 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:26.506 slat (nsec): min=8051, max=54689, avg=13526.55, stdev=6681.17 00:10:26.506 clat (usec): min=147, max=351, avg=197.80, stdev=25.48 00:10:26.506 lat (usec): min=156, max=364, avg=211.32, stdev=27.10 00:10:26.506 clat percentiles (usec): 00:10:26.506 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:10:26.506 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:10:26.506 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 241], 00:10:26.506 | 99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 347], 99.95th=[ 351], 00:10:26.506 | 99.99th=[ 351] 00:10:26.506 bw ( KiB/s): min= 6552, max= 6552, per=24.33%, avg=6552.00, stdev= 0.00, samples=1 00:10:26.506 iops : min= 1638, max= 1638, avg=1638.00, stdev= 0.00, samples=1 00:10:26.506 lat (usec) : 250=68.66%, 500=30.05%, 750=1.03% 00:10:26.506 lat (msec) : 50=0.25% 00:10:26.506 cpu : usr=2.70%, sys=5.30%, ctx=2807, majf=0, minf=1 00:10:26.506 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.506 issued rwts: total=1269,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.506 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.506 job3: (groupid=0, jobs=1): err= 0: pid=155507: Tue Nov 19 02:51:37 2024 00:10:26.506 read: IOPS=1358, BW=5435KiB/s (5565kB/s)(5440KiB/1001msec) 00:10:26.506 slat (nsec): min=6304, max=38330, avg=14275.91, stdev=6112.91 00:10:26.506 clat (usec): min=195, max=41203, avg=488.24, stdev=2982.23 00:10:26.506 lat (usec): min=202, max=41226, avg=502.52, stdev=2982.28 00:10:26.506 clat percentiles (usec): 00:10:26.506 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:10:26.506 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:10:26.506 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 322], 00:10:26.506 | 99.00th=[ 537], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:10:26.506 | 99.99th=[41157] 00:10:26.506 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:26.506 slat (usec): min=7, max=1009, avg=15.76, stdev=26.38 00:10:26.506 clat (usec): min=139, max=267, avg=182.59, stdev=20.80 00:10:26.506 lat (usec): min=149, max=1213, avg=198.35, 
stdev=35.92 00:10:26.506 clat percentiles (usec): 00:10:26.506 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:10:26.506 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:10:26.506 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 221], 00:10:26.506 | 99.00th=[ 237], 99.50th=[ 247], 99.90th=[ 265], 99.95th=[ 269], 00:10:26.506 | 99.99th=[ 269] 00:10:26.506 bw ( KiB/s): min= 8192, max= 8192, per=30.42%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.506 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.506 lat (usec) : 250=73.83%, 500=25.35%, 750=0.55% 00:10:26.506 lat (msec) : 50=0.28% 00:10:26.506 cpu : usr=2.80%, sys=5.90%, ctx=2898, majf=0, minf=1 00:10:26.506 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.506 issued rwts: total=1360,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.506 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.506 00:10:26.506 Run status group 0 (all jobs): 00:10:26.506 READ: bw=21.8MiB/s (22.9MB/s), 4031KiB/s-8184KiB/s (4128kB/s-8380kB/s), io=22.3MiB (23.4MB), run=1001-1023msec 00:10:26.506 WRITE: bw=26.3MiB/s (27.6MB/s), 6006KiB/s-9107KiB/s (6150kB/s-9325kB/s), io=26.9MiB (28.2MB), run=1001-1023msec 00:10:26.506 00:10:26.506 Disk stats (read/write): 00:10:26.506 nvme0n1: ios=1076/1536, merge=0/0, ticks=723/314, in_queue=1037, util=90.38% 00:10:26.506 nvme0n2: ios=1549/2024, merge=0/0, ticks=629/343, in_queue=972, util=90.83% 00:10:26.506 nvme0n3: ios=1055/1536, merge=0/0, ticks=1468/283, in_queue=1751, util=97.59% 00:10:26.506 nvme0n4: ios=1082/1257, merge=0/0, ticks=712/225, in_queue=937, util=97.67% 00:10:26.506 02:51:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:26.506 [global] 00:10:26.506 thread=1 00:10:26.506 invalidate=1 00:10:26.506 rw=randwrite 00:10:26.506 time_based=1 00:10:26.506 runtime=1 00:10:26.506 ioengine=libaio 00:10:26.506 direct=1 00:10:26.506 bs=4096 00:10:26.506 iodepth=1 00:10:26.506 norandommap=0 00:10:26.506 numjobs=1 00:10:26.506 00:10:26.506 verify_dump=1 00:10:26.506 verify_backlog=512 00:10:26.506 verify_state_save=0 00:10:26.506 do_verify=1 00:10:26.506 verify=crc32c-intel 00:10:26.506 [job0] 00:10:26.506 filename=/dev/nvme0n1 00:10:26.506 [job1] 00:10:26.506 filename=/dev/nvme0n2 00:10:26.764 [job2] 00:10:26.764 filename=/dev/nvme0n3 00:10:26.764 [job3] 00:10:26.764 filename=/dev/nvme0n4 00:10:26.764 Could not set queue depth (nvme0n1) 00:10:26.764 Could not set queue depth (nvme0n2) 00:10:26.764 Could not set queue depth (nvme0n3) 00:10:26.764 Could not set queue depth (nvme0n4) 00:10:26.764 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.764 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.764 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.764 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.764 fio-3.35 00:10:26.764 Starting 4 threads 00:10:28.135 00:10:28.135 job0: (groupid=0, jobs=1): err= 0: pid=155731: Tue Nov 19 02:51:38 
2024 00:10:28.135 read: IOPS=1832, BW=7328KiB/s (7504kB/s)(7328KiB/1000msec) 00:10:28.135 slat (nsec): min=5128, max=73866, avg=17848.37, stdev=10392.70 00:10:28.135 clat (usec): min=183, max=595, avg=314.41, stdev=85.92 00:10:28.135 lat (usec): min=193, max=611, avg=332.26, stdev=88.94 00:10:28.135 clat percentiles (usec): 00:10:28.135 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 212], 20.00th=[ 229], 00:10:28.135 | 30.00th=[ 251], 40.00th=[ 281], 50.00th=[ 306], 60.00th=[ 338], 00:10:28.135 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 429], 95.00th=[ 486], 00:10:28.135 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 594], 00:10:28.135 | 99.99th=[ 594] 00:10:28.135 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:10:28.135 slat (nsec): min=7417, max=48244, avg=12936.76, stdev=5539.09 00:10:28.135 clat (usec): min=117, max=335, avg=169.58, stdev=22.94 00:10:28.135 lat (usec): min=125, max=348, avg=182.52, stdev=23.67 00:10:28.135 clat percentiles (usec): 00:10:28.135 | 1.00th=[ 123], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 149], 00:10:28.135 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 176], 00:10:28.135 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 204], 00:10:28.135 | 99.00th=[ 223], 99.50th=[ 241], 99.90th=[ 302], 99.95th=[ 330], 00:10:28.135 | 99.99th=[ 334] 00:10:28.135 bw ( KiB/s): min= 8192, max= 8192, per=34.53%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.135 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.135 lat (usec) : 250=66.57%, 500=32.04%, 750=1.39% 00:10:28.135 cpu : usr=3.50%, sys=5.90%, ctx=3881, majf=0, minf=1 00:10:28.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.135 issued rwts: total=1832,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.135 job1: (groupid=0, jobs=1): err= 0: pid=155732: Tue Nov 19 02:51:38 2024 00:10:28.135 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:28.135 slat (nsec): min=6453, max=64177, avg=14088.71, stdev=6769.67 00:10:28.136 clat (usec): min=196, max=40994, avg=382.05, stdev=1465.41 00:10:28.136 lat (usec): min=203, max=41003, avg=396.14, stdev=1465.25 00:10:28.136 clat percentiles (usec): 00:10:28.136 | 1.00th=[ 212], 5.00th=[ 237], 10.00th=[ 262], 20.00th=[ 281], 00:10:28.136 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 343], 00:10:28.136 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 392], 95.00th=[ 469], 00:10:28.136 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[40633], 99.95th=[41157], 00:10:28.136 | 99.99th=[41157] 00:10:28.136 write: IOPS=1689, BW=6757KiB/s (6919kB/s)(6764KiB/1001msec); 0 zone resets 00:10:28.136 slat (usec): min=7, max=104, avg=18.51, stdev= 8.44 00:10:28.136 clat (usec): min=130, max=983, avg=204.17, stdev=51.18 00:10:28.136 lat (usec): min=138, max=999, avg=222.69, stdev=53.65 00:10:28.136 clat percentiles (usec): 00:10:28.136 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 169], 00:10:28.136 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 202], 00:10:28.136 | 70.00th=[ 217], 80.00th=[ 239], 90.00th=[ 269], 95.00th=[ 302], 00:10:28.136 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 437], 99.95th=[ 988], 00:10:28.136 | 99.99th=[ 988] 00:10:28.136 bw ( KiB/s): min= 8192, max= 8192, per=34.53%, 
avg=8192.00, stdev= 0.00, samples=1 00:10:28.136 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.136 lat (usec) : 250=48.53%, 500=50.08%, 750=1.30%, 1000=0.03% 00:10:28.136 lat (msec) : 50=0.06% 00:10:28.136 cpu : usr=3.20%, sys=7.60%, ctx=3228, majf=0, minf=1 00:10:28.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.136 issued rwts: total=1536,1691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.136 job2: (groupid=0, jobs=1): err= 0: pid=155733: Tue Nov 19 02:51:38 2024 00:10:28.136 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:10:28.136 slat (nsec): min=10577, max=18940, avg=16531.23, stdev=2302.72 00:10:28.136 clat (usec): min=2646, max=41952, avg=38757.96, stdev=8596.12 00:10:28.136 lat (usec): min=2665, max=41970, avg=38774.49, stdev=8595.70 00:10:28.136 clat percentiles (usec): 00:10:28.136 | 1.00th=[ 2638], 5.00th=[27132], 10.00th=[41157], 20.00th=[41157], 00:10:28.136 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:28.136 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:28.136 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:28.136 | 99.99th=[42206] 00:10:28.136 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:28.136 slat (nsec): min=8674, max=68260, avg=23172.07, stdev=9624.85 00:10:28.136 clat (usec): min=149, max=541, avg=295.55, stdev=93.52 00:10:28.136 lat (usec): min=159, max=567, avg=318.72, stdev=96.36 00:10:28.136 clat percentiles (usec): 00:10:28.136 | 1.00th=[ 176], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 221], 00:10:28.136 | 30.00th=[ 231], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 277], 00:10:28.136 | 70.00th=[ 330], 80.00th=[ 392], 90.00th=[ 453], 95.00th=[ 490], 00:10:28.136 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 545], 99.95th=[ 545], 00:10:28.136 | 99.99th=[ 545] 00:10:28.136 bw ( KiB/s): min= 4096, max= 4096, per=17.27%, avg=4096.00, stdev= 0.00, samples=1 00:10:28.136 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:28.136 lat (usec) : 250=43.82%, 500=48.31%, 750=3.75% 00:10:28.136 lat (msec) : 4=0.19%, 50=3.93% 00:10:28.136 cpu : usr=1.18%, sys=1.18%, ctx=534, majf=0, minf=1 00:10:28.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.136 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.136 job3: (groupid=0, jobs=1): err= 0: pid=155734: Tue Nov 19 02:51:38 2024 00:10:28.136 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:28.136 slat (nsec): min=7262, max=57756, avg=12605.37, stdev=5928.11 00:10:28.136 clat (usec): min=204, max=40986, avg=317.24, stdev=1461.96 00:10:28.136 lat (usec): min=212, max=40996, avg=329.84, stdev=1461.91 00:10:28.136 clat percentiles (usec): 00:10:28.136 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:10:28.136 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:10:28.136 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 
00:10:28.136 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[40633], 99.95th=[41157], 00:10:28.136 | 99.99th=[41157] 00:10:28.136 write: IOPS=1796, BW=7185KiB/s (7357kB/s)(7192KiB/1001msec); 0 zone resets 00:10:28.136 slat (nsec): min=7541, max=80738, avg=20629.09, stdev=10885.01 00:10:28.136 clat (usec): min=145, max=1224, avg=245.21, stdev=66.15 00:10:28.136 lat (usec): min=158, max=1234, avg=265.83, stdev=70.82 00:10:28.136 clat percentiles (usec): 00:10:28.136 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 200], 00:10:28.136 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:10:28.136 | 70.00th=[ 245], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 375], 00:10:28.136 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 922], 99.95th=[ 1221], 00:10:28.136 | 99.99th=[ 1221] 00:10:28.136 bw ( KiB/s): min= 8192, max= 8192, per=34.53%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.136 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.136 lat (usec) : 250=54.98%, 500=44.81%, 750=0.09%, 1000=0.03% 00:10:28.136 lat (msec) : 2=0.03%, 50=0.06% 00:10:28.136 cpu : usr=3.80%, sys=7.40%, ctx=3335, majf=0, minf=1 00:10:28.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.136 issued rwts: total=1536,1798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.136 00:10:28.136 Run status group 0 (all jobs): 00:10:28.136 READ: bw=18.9MiB/s (19.8MB/s), 86.3KiB/s-7328KiB/s (88.3kB/s-7504kB/s), io=19.2MiB (20.2MB), run=1000-1020msec 00:10:28.136 WRITE: bw=23.2MiB/s (24.3MB/s), 2008KiB/s-8192KiB/s (2056kB/s-8389kB/s), io=23.6MiB (24.8MB), run=1000-1020msec 00:10:28.136 00:10:28.136 Disk stats (read/write): 00:10:28.136 nvme0n1: ios=1588/1891, merge=0/0, ticks=872/307, in_queue=1179, util=90.08% 00:10:28.136 nvme0n2: ios=1255/1536, merge=0/0, ticks=1415/300, in_queue=1715, util=94.11% 00:10:28.136 nvme0n3: ios=74/512, merge=0/0, ticks=752/143, in_queue=895, util=94.80% 00:10:28.136 nvme0n4: ios=1367/1536, merge=0/0, ticks=732/359, in_queue=1091, util=98.85% 00:10:28.136 02:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:28.136 [global] 00:10:28.136 thread=1 00:10:28.136 invalidate=1 00:10:28.136 rw=write 00:10:28.136 time_based=1 00:10:28.136 runtime=1 00:10:28.136 ioengine=libaio 00:10:28.136 direct=1 00:10:28.136 bs=4096 00:10:28.136 iodepth=128 00:10:28.136 norandommap=0 00:10:28.136 numjobs=1 00:10:28.136 00:10:28.136 verify_dump=1 00:10:28.136 verify_backlog=512 00:10:28.136 verify_state_save=0 00:10:28.136 do_verify=1 00:10:28.136 verify=crc32c-intel 00:10:28.136 [job0] 00:10:28.136 filename=/dev/nvme0n1 00:10:28.136 [job1] 00:10:28.136 filename=/dev/nvme0n2 00:10:28.136 [job2] 00:10:28.136 filename=/dev/nvme0n3 00:10:28.136 [job3] 00:10:28.136 filename=/dev/nvme0n4 00:10:28.136 Could not set queue depth (nvme0n1) 00:10:28.136 Could not set queue depth (nvme0n2) 00:10:28.136 Could not set queue depth (nvme0n3) 00:10:28.136 Could not set queue depth (nvme0n4) 00:10:28.394 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.394 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:28.394 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.394 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.394 fio-3.35 00:10:28.394 Starting 4 threads 00:10:29.775 00:10:29.775 job0: (groupid=0, jobs=1): err= 0: pid=155966: Tue Nov 19 02:51:40 2024 00:10:29.775 read: IOPS=3172, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1004msec) 00:10:29.775 slat (usec): min=3, max=15279, avg=147.13, stdev=835.44 00:10:29.775 clat (usec): min=3278, max=61672, avg=19207.46, stdev=9562.64 00:10:29.775 lat (usec): min=6039, max=69832, avg=19354.59, stdev=9641.53 00:10:29.775 clat percentiles (usec): 00:10:29.775 | 1.00th=[ 8586], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:10:29.775 | 30.00th=[12518], 40.00th=[12911], 50.00th=[16712], 60.00th=[19268], 00:10:29.775 | 70.00th=[20055], 80.00th=[23462], 90.00th=[33817], 95.00th=[40109], 00:10:29.775 | 99.00th=[50594], 99.50th=[53740], 99.90th=[61604], 99.95th=[61604], 00:10:29.775 | 99.99th=[61604] 00:10:29.775 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:29.775 slat (usec): min=3, max=9533, avg=137.27, stdev=602.87 00:10:29.775 clat (usec): min=9019, max=61303, avg=18379.95, stdev=7820.01 00:10:29.775 lat (usec): min=9043, max=61309, avg=18517.22, stdev=7874.65 00:10:29.775 clat percentiles (usec): 00:10:29.775 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[11863], 20.00th=[12518], 00:10:29.775 | 30.00th=[12911], 40.00th=[15008], 50.00th=[16909], 60.00th=[19792], 00:10:29.775 | 70.00th=[20841], 80.00th=[22938], 90.00th=[24249], 95.00th=[26870], 00:10:29.775 | 99.00th=[54789], 99.50th=[57934], 99.90th=[61080], 99.95th=[61080], 00:10:29.775 | 99.99th=[61080] 00:10:29.775 bw ( KiB/s): min=12288, max=16272, per=21.60%, avg=14280.00, stdev=2817.11, samples=2 00:10:29.775 iops : min= 3072, max= 4068, avg=3570.00, stdev=704.28, samples=2 00:10:29.775 lat (msec) : 4=0.01%, 10=2.75%, 20=62.65%, 50=33.20%, 100=1.39% 00:10:29.775 cpu : usr=4.59%, sys=8.47%, ctx=423, majf=0, minf=1 00:10:29.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:29.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.775 issued rwts: total=3185,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.775 job1: (groupid=0, jobs=1): err= 0: pid=155967: Tue Nov 19 02:51:40 2024 00:10:29.775 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:29.775 slat (usec): min=2, max=10932, avg=85.74, stdev=494.46 00:10:29.775 clat (usec): min=5197, max=32687, avg=11948.84, stdev=2914.39 00:10:29.775 lat (usec): min=5205, max=32701, avg=12034.58, stdev=2928.14 00:10:29.775 clat percentiles (usec): 00:10:29.775 | 1.00th=[ 5866], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[10552], 00:10:29.775 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:10:29.775 | 70.00th=[12387], 80.00th=[12649], 90.00th=[14484], 95.00th=[16909], 00:10:29.775 | 99.00th=[21365], 99.50th=[26870], 99.90th=[32637], 99.95th=[32637], 00:10:29.775 | 99.99th=[32637] 00:10:29.775 write: IOPS=4801, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1003msec); 0 zone resets 00:10:29.775 slat (usec): min=3, max=8785, avg=107.29, stdev=528.26 00:10:29.775 clat (usec): min=772, max=63707, avg=14978.90, stdev=11760.76 
00:10:29.775 lat (usec): min=780, max=63713, avg=15086.18, stdev=11841.69 00:10:29.775 clat percentiles (usec): 00:10:29.775 | 1.00th=[ 5276], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10683], 00:10:29.775 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:10:29.775 | 70.00th=[11994], 80.00th=[12387], 90.00th=[16319], 95.00th=[53216], 00:10:29.775 | 99.00th=[60556], 99.50th=[61604], 99.90th=[63701], 99.95th=[63701], 00:10:29.775 | 99.99th=[63701] 00:10:29.775 bw ( KiB/s): min=16424, max=21088, per=28.37%, avg=18756.00, stdev=3297.95, samples=2 00:10:29.775 iops : min= 4106, max= 5272, avg=4689.00, stdev=824.49, samples=2 00:10:29.775 lat (usec) : 1000=0.04% 00:10:29.775 lat (msec) : 4=0.18%, 10=12.56%, 20=81.25%, 50=2.92%, 100=3.05% 00:10:29.775 cpu : usr=7.39%, sys=10.98%, ctx=509, majf=0, minf=1 00:10:29.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:29.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.775 issued rwts: total=4608,4816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.775 job2: (groupid=0, jobs=1): err= 0: pid=155968: Tue Nov 19 02:51:40 2024 00:10:29.775 read: IOPS=3922, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1003msec) 00:10:29.775 slat (usec): min=2, max=16271, avg=118.89, stdev=773.01 00:10:29.775 clat (usec): min=1433, max=56544, avg=15334.61, stdev=4947.49 00:10:29.775 lat (usec): min=4578, max=56553, avg=15453.51, stdev=4997.10 00:10:29.775 clat percentiles (usec): 00:10:29.775 | 1.00th=[ 7308], 5.00th=[ 8848], 10.00th=[11338], 20.00th=[12780], 00:10:29.775 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[15139], 00:10:29.775 | 70.00th=[15926], 80.00th=[17171], 90.00th=[20055], 95.00th=[23200], 00:10:29.775 | 99.00th=[35914], 99.50th=[35914], 99.90th=[53216], 99.95th=[56361], 00:10:29.775 | 99.99th=[56361] 00:10:29.775 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:29.775 slat (usec): min=3, max=10525, avg=117.25, stdev=660.11 00:10:29.775 clat (usec): min=722, max=34860, avg=16243.17, stdev=5650.22 00:10:29.775 lat (usec): min=728, max=34878, avg=16360.42, stdev=5708.65 00:10:29.775 clat percentiles (usec): 00:10:29.775 | 1.00th=[ 3490], 5.00th=[ 9110], 10.00th=[11600], 20.00th=[12649], 00:10:29.775 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14353], 00:10:29.775 | 70.00th=[17695], 80.00th=[23200], 90.00th=[25035], 95.00th=[26346], 00:10:29.775 | 99.00th=[28443], 99.50th=[30278], 99.90th=[34866], 99.95th=[34866], 00:10:29.775 | 99.99th=[34866] 00:10:29.776 bw ( KiB/s): min=16384, max=16384, per=24.79%, avg=16384.00, stdev= 0.00, samples=2 00:10:29.776 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:29.776 lat (usec) : 750=0.02%, 1000=0.09% 00:10:29.776 lat (msec) : 2=0.01%, 4=0.62%, 10=4.78%, 20=76.03%, 50=18.23% 00:10:29.776 lat (msec) : 100=0.21% 00:10:29.776 cpu : usr=3.99%, sys=5.69%, ctx=331, majf=0, minf=1 00:10:29.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:29.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.776 issued rwts: total=3934,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.776 job3: (groupid=0, 
jobs=1): err= 0: pid=155969: Tue Nov 19 02:51:40 2024 00:10:29.776 read: IOPS=3730, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1004msec) 00:10:29.776 slat (usec): min=3, max=7915, avg=124.44, stdev=628.27 00:10:29.776 clat (usec): min=815, max=37819, avg=15993.02, stdev=4222.75 00:10:29.776 lat (usec): min=5251, max=37835, avg=16117.46, stdev=4229.85 00:10:29.776 clat percentiles (usec): 00:10:29.776 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12387], 20.00th=[13304], 00:10:29.776 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14484], 60.00th=[15008], 00:10:29.776 | 70.00th=[15533], 80.00th=[19792], 90.00th=[21890], 95.00th=[24511], 00:10:29.776 | 99.00th=[32113], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:10:29.776 | 99.99th=[38011] 00:10:29.776 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:29.776 slat (usec): min=4, max=9469, avg=118.32, stdev=599.90 00:10:29.776 clat (usec): min=10225, max=37434, avg=16052.73, stdev=4538.21 00:10:29.776 lat (usec): min=10458, max=37446, avg=16171.05, stdev=4554.90 00:10:29.776 clat percentiles (usec): 00:10:29.776 | 1.00th=[11338], 5.00th=[11731], 10.00th=[12256], 20.00th=[12518], 00:10:29.776 | 30.00th=[13173], 40.00th=[14222], 50.00th=[14746], 60.00th=[15270], 00:10:29.776 | 70.00th=[16057], 80.00th=[18744], 90.00th=[22152], 95.00th=[26870], 00:10:29.776 | 99.00th=[32113], 99.50th=[33817], 99.90th=[37487], 99.95th=[37487], 00:10:29.776 | 99.99th=[37487] 00:10:29.776 bw ( KiB/s): min=15360, max=17408, per=24.79%, avg=16384.00, stdev=1448.15, samples=2 00:10:29.776 iops : min= 3840, max= 4352, avg=4096.00, stdev=362.04, samples=2 00:10:29.776 lat (usec) : 1000=0.01% 00:10:29.776 lat (msec) : 10=0.68%, 20=82.72%, 50=16.59% 00:10:29.776 cpu : usr=7.08%, sys=9.47%, ctx=443, majf=0, minf=1 00:10:29.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:29.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.776 issued rwts: total=3745,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.776 00:10:29.776 Run status group 0 (all jobs): 00:10:29.776 READ: bw=60.2MiB/s (63.1MB/s), 12.4MiB/s-17.9MiB/s (13.0MB/s-18.8MB/s), io=60.4MiB (63.4MB), run=1003-1004msec 00:10:29.776 WRITE: bw=64.6MiB/s (67.7MB/s), 13.9MiB/s-18.8MiB/s (14.6MB/s-19.7MB/s), io=64.8MiB (68.0MB), run=1003-1004msec 00:10:29.776 00:10:29.776 Disk stats (read/write): 00:10:29.776 nvme0n1: ios=2585/2933, merge=0/0, ticks=17057/17658, in_queue=34715, util=93.29% 00:10:29.776 nvme0n2: ios=3875/4096, merge=0/0, ticks=19964/36692, in_queue=56656, util=96.65% 00:10:29.776 nvme0n3: ios=3095/3532, merge=0/0, ticks=28907/32642, in_queue=61549, util=99.06% 00:10:29.776 nvme0n4: ios=3238/3584, merge=0/0, ticks=13769/13813, in_queue=27582, util=97.91% 00:10:29.776 02:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:29.776 [global] 00:10:29.776 thread=1 00:10:29.776 invalidate=1 00:10:29.776 rw=randwrite 00:10:29.776 time_based=1 00:10:29.776 runtime=1 00:10:29.776 ioengine=libaio 00:10:29.776 direct=1 00:10:29.776 bs=4096 00:10:29.776 iodepth=128 00:10:29.776 norandommap=0 00:10:29.776 numjobs=1 00:10:29.776 00:10:29.776 verify_dump=1 00:10:29.776 verify_backlog=512 00:10:29.776 verify_state_save=0 00:10:29.776 do_verify=1 
00:10:29.776 verify=crc32c-intel 00:10:29.776 [job0] 00:10:29.776 filename=/dev/nvme0n1 00:10:29.776 [job1] 00:10:29.776 filename=/dev/nvme0n2 00:10:29.776 [job2] 00:10:29.776 filename=/dev/nvme0n3 00:10:29.776 [job3] 00:10:29.776 filename=/dev/nvme0n4 00:10:29.776 Could not set queue depth (nvme0n1) 00:10:29.776 Could not set queue depth (nvme0n2) 00:10:29.776 Could not set queue depth (nvme0n3) 00:10:29.776 Could not set queue depth (nvme0n4) 00:10:29.776 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.776 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.776 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.776 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.776 fio-3.35 00:10:29.776 Starting 4 threads 00:10:31.151 00:10:31.151 job0: (groupid=0, jobs=1): err= 0: pid=156317: Tue Nov 19 02:51:41 2024 00:10:31.151 read: IOPS=2785, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1007msec) 00:10:31.151 slat (usec): min=3, max=15958, avg=137.97, stdev=855.66 00:10:31.151 clat (usec): min=1786, max=39622, avg=17790.64, stdev=5251.19 00:10:31.151 lat (usec): min=7025, max=39657, avg=17928.61, stdev=5311.35 00:10:31.151 clat percentiles (usec): 00:10:31.151 | 1.00th=[ 7308], 5.00th=[10945], 10.00th=[11994], 20.00th=[13829], 00:10:31.151 | 30.00th=[15401], 40.00th=[15664], 50.00th=[16450], 60.00th=[17957], 00:10:31.151 | 70.00th=[19530], 80.00th=[22152], 90.00th=[25035], 95.00th=[28443], 00:10:31.151 | 99.00th=[30278], 99.50th=[34866], 99.90th=[34866], 99.95th=[36963], 00:10:31.151 | 99.99th=[39584] 00:10:31.151 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:10:31.151 slat (usec): min=3, max=15539, avg=191.01, stdev=1058.52 00:10:31.151 clat (usec): min=4780, max=75991, avg=25216.89, stdev=10505.47 00:10:31.151 lat (usec): min=4793, max=76006, avg=25407.90, stdev=10595.04 00:10:31.151 clat percentiles (usec): 00:10:31.151 | 1.00th=[ 4883], 5.00th=[12256], 10.00th=[15270], 20.00th=[19006], 00:10:31.151 | 30.00th=[21627], 40.00th=[22676], 50.00th=[22938], 60.00th=[25560], 00:10:31.151 | 70.00th=[25822], 80.00th=[28705], 90.00th=[36963], 95.00th=[44303], 00:10:31.151 | 99.00th=[70779], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:10:31.151 | 99.99th=[76022] 00:10:31.151 bw ( KiB/s): min=12288, max=12288, per=21.49%, avg=12288.00, stdev= 0.00, samples=2 00:10:31.151 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:31.151 lat (msec) : 2=0.02%, 10=3.11%, 20=42.86%, 50=51.86%, 100=2.14% 00:10:31.151 cpu : usr=4.27%, sys=5.67%, ctx=267, majf=0, minf=1 00:10:31.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:31.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.151 issued rwts: total=2805,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.151 job1: (groupid=0, jobs=1): err= 0: pid=156318: Tue Nov 19 02:51:41 2024 00:10:31.151 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:10:31.151 slat (usec): min=2, max=13140, avg=131.15, stdev=844.18 00:10:31.151 clat (usec): min=6979, max=48155, avg=16747.06, stdev=6133.12 00:10:31.151 lat (usec): min=7460, 
max=48169, avg=16878.21, stdev=6212.77 00:10:31.151 clat percentiles (usec): 00:10:31.151 | 1.00th=[ 8356], 5.00th=[10421], 10.00th=[11076], 20.00th=[11600], 00:10:31.151 | 30.00th=[12256], 40.00th=[13042], 50.00th=[15664], 60.00th=[16581], 00:10:31.151 | 70.00th=[19268], 80.00th=[22414], 90.00th=[24249], 95.00th=[27657], 00:10:31.151 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:10:31.151 | 99.99th=[47973] 00:10:31.151 write: IOPS=3497, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1007msec); 0 zone resets 00:10:31.151 slat (usec): min=3, max=15331, avg=161.33, stdev=991.03 00:10:31.151 clat (usec): min=6306, max=50696, avg=21357.45, stdev=9114.97 00:10:31.151 lat (usec): min=6682, max=50737, avg=21518.78, stdev=9211.16 00:10:31.151 clat percentiles (usec): 00:10:31.151 | 1.00th=[ 8356], 5.00th=[10028], 10.00th=[11469], 20.00th=[12125], 00:10:31.151 | 30.00th=[13173], 40.00th=[16712], 50.00th=[21103], 60.00th=[22414], 00:10:31.151 | 70.00th=[24511], 80.00th=[29230], 90.00th=[36963], 95.00th=[38011], 00:10:31.151 | 99.00th=[39060], 99.50th=[40633], 99.90th=[50594], 99.95th=[50594], 00:10:31.151 | 99.99th=[50594] 00:10:31.151 bw ( KiB/s): min=10776, max=16384, per=23.75%, avg=13580.00, stdev=3965.45, samples=2 00:10:31.151 iops : min= 2694, max= 4096, avg=3395.00, stdev=991.36, samples=2 00:10:31.151 lat (msec) : 10=4.35%, 20=52.50%, 50=43.08%, 100=0.06% 00:10:31.151 cpu : usr=2.49%, sys=6.26%, ctx=307, majf=0, minf=1 00:10:31.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:31.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.151 issued rwts: total=3072,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.151 job2: (groupid=0, jobs=1): err= 0: pid=156319: Tue Nov 19 02:51:41 2024 00:10:31.151 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:10:31.151 slat (usec): min=2, max=11289, avg=149.70, stdev=877.06 00:10:31.151 clat (usec): min=6174, max=45813, avg=19734.91, stdev=6340.46 00:10:31.151 lat (usec): min=6192, max=50908, avg=19884.61, stdev=6410.73 00:10:31.151 clat percentiles (usec): 00:10:31.151 | 1.00th=[10683], 5.00th=[12780], 10.00th=[13829], 20.00th=[14353], 00:10:31.151 | 30.00th=[14615], 40.00th=[15008], 50.00th=[19268], 60.00th=[19792], 00:10:31.151 | 70.00th=[22676], 80.00th=[25560], 90.00th=[26870], 95.00th=[31589], 00:10:31.151 | 99.00th=[38536], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:10:31.151 | 99.99th=[45876] 00:10:31.151 write: IOPS=3181, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1008msec); 0 zone resets 00:10:31.151 slat (usec): min=3, max=12076, avg=148.32, stdev=864.09 00:10:31.151 clat (usec): min=380, max=72291, avg=20987.03, stdev=12938.13 00:10:31.151 lat (usec): min=396, max=72322, avg=21135.35, stdev=13029.04 00:10:31.151 clat percentiles (usec): 00:10:31.151 | 1.00th=[ 2311], 5.00th=[ 5932], 10.00th=[ 9110], 20.00th=[12911], 00:10:31.151 | 30.00th=[13698], 40.00th=[14353], 50.00th=[17433], 60.00th=[20841], 00:10:31.151 | 70.00th=[23200], 80.00th=[26870], 90.00th=[38536], 95.00th=[52691], 00:10:31.151 | 99.00th=[62653], 99.50th=[66847], 99.90th=[71828], 99.95th=[71828], 00:10:31.151 | 99.99th=[71828] 00:10:31.151 bw ( KiB/s): min= 9448, max=15248, per=21.60%, avg=12348.00, stdev=4101.22, samples=2 00:10:31.151 iops : min= 2362, max= 3812, avg=3087.00, stdev=1025.30, samples=2 00:10:31.152 lat (usec) : 
500=0.03%, 1000=0.10% 00:10:31.152 lat (msec) : 2=0.16%, 4=0.96%, 10=5.62%, 20=51.25%, 50=38.54% 00:10:31.152 lat (msec) : 100=3.34% 00:10:31.152 cpu : usr=4.87%, sys=7.05%, ctx=262, majf=0, minf=1 00:10:31.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:31.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.152 issued rwts: total=3072,3207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.152 job3: (groupid=0, jobs=1): err= 0: pid=156320: Tue Nov 19 02:51:41 2024 00:10:31.152 read: IOPS=4259, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1007msec) 00:10:31.152 slat (usec): min=2, max=8245, avg=120.88, stdev=637.11 00:10:31.152 clat (usec): min=4533, max=57290, avg=15738.06, stdev=8162.71 00:10:31.152 lat (usec): min=4913, max=65536, avg=15858.95, stdev=8207.97 00:10:31.152 clat percentiles (usec): 00:10:31.152 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11731], 00:10:31.152 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[13566], 00:10:31.152 | 70.00th=[14746], 80.00th=[16909], 90.00th=[27919], 95.00th=[34866], 00:10:31.152 | 99.00th=[47973], 99.50th=[53740], 99.90th=[57410], 99.95th=[57410], 00:10:31.152 | 99.99th=[57410] 00:10:31.152 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:10:31.152 slat (usec): min=3, max=6937, avg=93.24, stdev=448.85 00:10:31.152 clat (usec): min=7402, max=20964, avg=12986.90, stdev=2044.98 00:10:31.152 lat (usec): min=7412, max=20973, avg=13080.14, stdev=2051.66 00:10:31.152 clat percentiles (usec): 00:10:31.152 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[11469], 00:10:31.152 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:10:31.152 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15795], 95.00th=[16450], 00:10:31.152 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:10:31.152 | 99.99th=[20841] 00:10:31.152 bw ( KiB/s): min=18160, max=18704, per=32.24%, avg=18432.00, stdev=384.67, samples=2 00:10:31.152 iops : min= 4540, max= 4676, avg=4608.00, stdev=96.17, samples=2 00:10:31.152 lat (msec) : 10=6.97%, 20=85.48%, 50=7.10%, 100=0.45% 00:10:31.152 cpu : usr=4.47%, sys=11.73%, ctx=473, majf=0, minf=1 00:10:31.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:31.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.152 issued rwts: total=4289,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.152 00:10:31.152 Run status group 0 (all jobs): 00:10:31.152 READ: bw=51.3MiB/s (53.8MB/s), 10.9MiB/s-16.6MiB/s (11.4MB/s-17.4MB/s), io=51.7MiB (54.2MB), run=1007-1008msec 00:10:31.152 WRITE: bw=55.8MiB/s (58.6MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.7MB/s), io=56.3MiB (59.0MB), run=1007-1008msec 00:10:31.152 00:10:31.152 Disk stats (read/write): 00:10:31.152 nvme0n1: ios=2348/2560, merge=0/0, ticks=21581/31576, in_queue=53157, util=89.98% 00:10:31.152 nvme0n2: ios=2663/3072, merge=0/0, ticks=17993/23776, in_queue=41769, util=94.11% 00:10:31.152 nvme0n3: ios=2617/2655, merge=0/0, ticks=33209/38631, in_queue=71840, util=94.79% 00:10:31.152 nvme0n4: ios=3866/4096, merge=0/0, ticks=21524/16959, in_queue=38483, util=96.11% 00:10:31.152 
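All four fio-wrapper passes above drive the same four namespaces (/dev/nvme0n1-n4) with 4 KiB libaio jobs, direct I/O and crc32c-intel verification, one job per device: the first two runs at iodepth=1 (write, then randwrite), the last two at iodepth=128. Stripped of the wrapper, one of the QD1 write jobs corresponds roughly to the standalone invocation below; this is a reconstruction from the job file printed earlier, not a command that appears in the log:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1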
02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:31.152 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=156460 00:10:31.152 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:31.152 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:31.152 [global] 00:10:31.152 thread=1 00:10:31.152 invalidate=1 00:10:31.152 rw=read 00:10:31.152 time_based=1 00:10:31.152 runtime=10 00:10:31.152 ioengine=libaio 00:10:31.152 direct=1 00:10:31.152 bs=4096 00:10:31.152 iodepth=1 00:10:31.152 norandommap=1 00:10:31.152 numjobs=1 00:10:31.152 00:10:31.152 [job0] 00:10:31.152 filename=/dev/nvme0n1 00:10:31.152 [job1] 00:10:31.152 filename=/dev/nvme0n2 00:10:31.152 [job2] 00:10:31.152 filename=/dev/nvme0n3 00:10:31.152 [job3] 00:10:31.152 filename=/dev/nvme0n4 00:10:31.152 Could not set queue depth (nvme0n1) 00:10:31.152 Could not set queue depth (nvme0n2) 00:10:31.152 Could not set queue depth (nvme0n3) 00:10:31.152 Could not set queue depth (nvme0n4) 00:10:31.152 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.152 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.152 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.152 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.152 fio-3.35 00:10:31.152 Starting 4 threads 00:10:34.430 02:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:34.430 02:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:34.430 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13443072, buflen=4096 00:10:34.430 fio: pid=156559, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.688 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.688 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:34.688 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49197056, buflen=4096 00:10:34.688 fio: pid=156558, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.945 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.945 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:34.945 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7643136, buflen=4096 00:10:34.945 fio: pid=156555, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:35.203 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2527232, buflen=4096 00:10:35.203 fio: pid=156557, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:10:35.203 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.203 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:35.203 00:10:35.203 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156555: Tue Nov 19 02:51:45 2024 00:10:35.203 read: IOPS=529, BW=2116KiB/s (2166kB/s)(7464KiB/3528msec) 00:10:35.203 slat (usec): min=5, max=11900, avg=16.50, stdev=275.24 00:10:35.203 clat (usec): min=203, max=41265, avg=1858.57, stdev=7837.08 00:10:35.203 lat (usec): min=212, max=53006, avg=1875.06, stdev=7876.01 00:10:35.203 clat percentiles (usec): 00:10:35.203 | 1.00th=[ 215], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 269], 00:10:35.203 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:10:35.203 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 383], 00:10:35.203 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:35.203 | 99.99th=[41157] 00:10:35.203 bw ( KiB/s): min= 96, max= 7880, per=13.10%, avg=2472.00, stdev=3624.99, samples=6 00:10:35.203 iops : min= 24, max= 1970, avg=618.00, stdev=906.25, samples=6 00:10:35.203 lat (usec) : 250=3.54%, 500=92.29%, 750=0.27% 00:10:35.203 lat (msec) : 50=3.86% 00:10:35.203 cpu : usr=0.43%, sys=0.77%, ctx=1869, majf=0, minf=1 00:10:35.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.203 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.203 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.203 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156557: Tue Nov 19 02:51:45 2024 00:10:35.203 read: IOPS=163, BW=655KiB/s (671kB/s)(2468KiB/3769msec) 00:10:35.203 slat (usec): min=6, max=34934, avg=101.50, stdev=1561.20 00:10:35.203 clat (usec): min=201, max=41274, avg=5964.54, stdev=14078.28 00:10:35.203 lat (usec): min=209, max=76005, avg=6066.18, stdev=14302.77 00:10:35.203 clat percentiles (usec): 00:10:35.203 | 1.00th=[ 208], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 273], 00:10:35.203 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:10:35.203 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[41157], 95.00th=[41157], 00:10:35.203 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:35.203 | 99.99th=[41157] 00:10:35.203 bw ( KiB/s): min= 100, max= 3368, per=3.69%, avg=697.71, stdev=1190.75, samples=7 00:10:35.203 iops : min= 25, max= 842, avg=174.43, stdev=297.69, samples=7 00:10:35.203 lat (usec) : 250=9.71%, 500=75.24%, 750=0.81% 00:10:35.203 lat (msec) : 10=0.16%, 50=13.92% 00:10:35.204 cpu : usr=0.05%, sys=0.48%, ctx=621, majf=0, minf=2 00:10:35.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.204 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.204 issued rwts: total=618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.204 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156558: Tue Nov 19 02:51:45 2024 00:10:35.204 read: IOPS=3725, BW=14.6MiB/s (15.3MB/s)(46.9MiB/3224msec) 00:10:35.204 slat (usec): min=5, max=7772, avg=13.78, stdev=96.48 00:10:35.204 clat (usec): min=179, max=456, avg=249.58, stdev=35.57 00:10:35.204 lat (usec): min=186, max=8020, avg=263.35, stdev=103.60 00:10:35.204 clat percentiles (usec): 00:10:35.204 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:10:35.204 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 260], 00:10:35.204 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:10:35.204 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 351], 99.95th=[ 367], 00:10:35.204 | 99.99th=[ 449] 00:10:35.204 bw ( KiB/s): min=13864, max=16744, per=79.13%, avg=14928.00, stdev=1114.17, samples=6 00:10:35.204 iops : min= 3466, max= 4186, avg=3732.00, stdev=278.54, samples=6 00:10:35.204 lat (usec) : 250=51.30%, 500=48.69% 00:10:35.204 cpu : usr=2.67%, sys=6.95%, ctx=12016, majf=0, minf=1 00:10:35.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.204 issued rwts: total=12012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.204 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156559: Tue Nov 19 02:51:45 2024 00:10:35.204 read: IOPS=1117, BW=4467KiB/s (4574kB/s)(12.8MiB/2939msec) 00:10:35.204 slat (nsec): min=4451, max=66690, avg=11169.48, stdev=6710.12 00:10:35.204 clat (usec): min=185, max=41353, avg=872.86, stdev=5034.32 00:10:35.204 lat (usec): min=190, max=41374, avg=884.03, stdev=5035.60 00:10:35.204 clat percentiles (usec): 00:10:35.204 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:10:35.204 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:10:35.204 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 326], 00:10:35.204 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:35.204 | 99.99th=[41157] 00:10:35.204 bw ( KiB/s): min= 112, max= 8040, per=15.79%, avg=2979.20, stdev=3160.94, samples=5 00:10:35.204 iops : min= 28, max= 2010, avg=744.80, stdev=790.24, samples=5 00:10:35.204 lat (usec) : 250=75.63%, 500=22.36%, 750=0.43% 00:10:35.204 lat (msec) : 50=1.55% 00:10:35.204 cpu : usr=0.68%, sys=1.40%, ctx=3283, majf=0, minf=1 00:10:35.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.204 issued rwts: total=3283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.204 00:10:35.204 Run status group 0 (all jobs): 00:10:35.204 READ: bw=18.4MiB/s (19.3MB/s), 655KiB/s-14.6MiB/s (671kB/s-15.3MB/s), io=69.4MiB (72.8MB), run=2939-3769msec 00:10:35.204 00:10:35.204 Disk stats (read/write): 00:10:35.204 nvme0n1: ios=1862/0, merge=0/0, ticks=3285/0, in_queue=3285, util=95.85% 00:10:35.204 nvme0n2: ios=613/0, merge=0/0, ticks=3517/0, in_queue=3517, util=95.10% 00:10:35.204 nvme0n3: ios=11593/0, merge=0/0, ticks=2923/0, in_queue=2923, util=99.19% 00:10:35.204 
nvme0n4: ios=3010/0, merge=0/0, ticks=2774/0, in_queue=2774, util=96.74% 00:10:35.462 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.462 02:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:35.721 02:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.721 02:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:35.979 02:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.979 02:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:36.238 02:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.238 02:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:36.496 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:36.496 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 156460 00:10:36.496 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:36.496 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:36.754 nvmf hotplug test: fio failed as expected 00:10:36.754 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.012 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:37.012 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm 
-f ./local-job1-1-verify.state 00:10:37.012 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:37.012 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.013 rmmod nvme_tcp 00:10:37.013 rmmod nvme_fabrics 00:10:37.013 rmmod nvme_keyring 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 154419 ']' 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 154419 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 154419 ']' 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 154419 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154419 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154419' 00:10:37.013 killing process with pid 154419 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 154419 00:10:37.013 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 154419 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.273 02:51:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.273 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.814 00:10:39.814 real 0m24.166s 00:10:39.814 user 1m24.846s 00:10:39.814 sys 0m7.447s 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.814 ************************************ 00:10:39.814 END TEST nvmf_fio_target 00:10:39.814 ************************************ 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.814 ************************************ 00:10:39.814 START TEST nvmf_bdevio 00:10:39.814 ************************************ 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:39.814 * Looking for test storage... 
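The hotplug teardown that precedes the bdevio run (nvme disconnect, subsystem deletion, nvmftestfini) can be read as the sequence below. It is a sketch assembled only from commands visible in this log, with two caveats: 154419 is simply this run's target pid, and the `ip netns delete` line is an assumption about what _remove_spdk_ns does, since that function's body is not shown here.

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
modprobe -v -r nvme-tcp          # also unloads nvme_fabrics / nvme_keyring, per the rmmod lines above
kill 154419                      # killprocess <nvmfpid>; pid specific to this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the SPDK_NVMF-tagged rules
ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns (not shown in the log)
ip -4 addr flush cvl_0_1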
00:10:39.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.814 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.814 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:39.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.814 --rc genhtml_branch_coverage=1 00:10:39.814 --rc genhtml_function_coverage=1 00:10:39.814 --rc genhtml_legend=1 00:10:39.814 --rc geninfo_all_blocks=1 00:10:39.814 --rc geninfo_unexecuted_blocks=1 00:10:39.814 00:10:39.814 ' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:39.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.815 --rc genhtml_branch_coverage=1 00:10:39.815 --rc genhtml_function_coverage=1 00:10:39.815 --rc genhtml_legend=1 00:10:39.815 --rc geninfo_all_blocks=1 00:10:39.815 --rc geninfo_unexecuted_blocks=1 00:10:39.815 00:10:39.815 ' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:39.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.815 --rc genhtml_branch_coverage=1 00:10:39.815 --rc genhtml_function_coverage=1 00:10:39.815 --rc genhtml_legend=1 00:10:39.815 --rc geninfo_all_blocks=1 00:10:39.815 --rc geninfo_unexecuted_blocks=1 00:10:39.815 00:10:39.815 ' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:39.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.815 --rc genhtml_branch_coverage=1 00:10:39.815 --rc genhtml_function_coverage=1 00:10:39.815 --rc genhtml_legend=1 00:10:39.815 --rc geninfo_all_blocks=1 00:10:39.815 --rc geninfo_unexecuted_blocks=1 00:10:39.815 00:10:39.815 ' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.815 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.718 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:41.719 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:41.719 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.719 02:51:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:41.719 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:41.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.719 
02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:10:41.719 00:10:41.719 --- 10.0.0.2 ping statistics --- 00:10:41.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.719 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:10:41.719 00:10:41.719 --- 10.0.0.1 ping statistics --- 00:10:41.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.719 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:41.719 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=159193 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 159193 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 159193 ']' 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.720 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.978 [2024-11-19 02:51:52.369311] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
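For reference, the target-side network plumbing and nvmf_tgt launch recorded above condense to the commands below. All of them are taken from this log; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses, and the namespace name are specific to this run rather than fixed values, and the iptables comment string is shortened here (the log tags it with the full rule text).

# move the target port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace (same flags as nvmfappstart -m 0x78 above)
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78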
00:10:41.978 [2024-11-19 02:51:52.369395] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.978 [2024-11-19 02:51:52.443216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.978 [2024-11-19 02:51:52.492084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.978 [2024-11-19 02:51:52.492147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.978 [2024-11-19 02:51:52.492161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.978 [2024-11-19 02:51:52.492172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.978 [2024-11-19 02:51:52.492180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.978 [2024-11-19 02:51:52.493814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:41.978 [2024-11-19 02:51:52.493877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:41.978 [2024-11-19 02:51:52.493943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:41.978 [2024-11-19 02:51:52.493945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.237 [2024-11-19 02:51:52.636618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.237 Malloc0 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.237 02:51:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.237 [2024-11-19 02:51:52.699434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:42.237 { 00:10:42.237 "params": { 00:10:42.237 "name": "Nvme$subsystem", 00:10:42.237 "trtype": "$TEST_TRANSPORT", 00:10:42.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.237 "adrfam": "ipv4", 00:10:42.237 "trsvcid": "$NVMF_PORT", 00:10:42.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.237 "hdgst": ${hdgst:-false}, 00:10:42.237 "ddgst": ${ddgst:-false} 00:10:42.237 }, 00:10:42.237 "method": "bdev_nvme_attach_controller" 00:10:42.237 } 00:10:42.237 EOF 00:10:42.237 )") 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:42.237 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:42.237 "params": { 00:10:42.237 "name": "Nvme1", 00:10:42.237 "trtype": "tcp", 00:10:42.237 "traddr": "10.0.0.2", 00:10:42.237 "adrfam": "ipv4", 00:10:42.237 "trsvcid": "4420", 00:10:42.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.237 "hdgst": false, 00:10:42.237 "ddgst": false 00:10:42.237 }, 00:10:42.237 "method": "bdev_nvme_attach_controller" 00:10:42.237 }' 00:10:42.237 [2024-11-19 02:51:52.749164] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
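The subsystem setup that bdevio exercises above boils down to five RPCs against the target's default /var/tmp/spdk.sock socket. This is a condensed restatement of the rpc_cmd calls already in the log (rpc.py is spdk/scripts/rpc.py), not an additional step the test performs.

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio app itself then attaches as an initiator using the bdev_nvme_attach_controller JSON generated by gen_nvmf_target_json just above, passed in via --json /dev/fd/62.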
00:10:42.237 [2024-11-19 02:51:52.749243] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159338 ] 00:10:42.237 [2024-11-19 02:51:52.818781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.495 [2024-11-19 02:51:52.870743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.495 [2024-11-19 02:51:52.870769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.495 [2024-11-19 02:51:52.870773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.495 I/O targets: 00:10:42.495 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:42.495 00:10:42.495 00:10:42.495 CUnit - A unit testing framework for C - Version 2.1-3 00:10:42.495 http://cunit.sourceforge.net/ 00:10:42.495 00:10:42.495 00:10:42.495 Suite: bdevio tests on: Nvme1n1 00:10:42.495 Test: blockdev write read block ...passed 00:10:42.753 Test: blockdev write zeroes read block ...passed 00:10:42.753 Test: blockdev write zeroes read no split ...passed 00:10:42.753 Test: blockdev write zeroes read split ...passed 00:10:42.753 Test: blockdev write zeroes read split partial ...passed 00:10:42.753 Test: blockdev reset ...[2024-11-19 02:51:53.157186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:42.753 [2024-11-19 02:51:53.157306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e4b70 (9): Bad file descriptor 00:10:42.753 [2024-11-19 02:51:53.301288] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:42.753 passed 00:10:42.753 Test: blockdev write read 8 blocks ...passed 00:10:42.753 Test: blockdev write read size > 128k ...passed 00:10:42.753 Test: blockdev write read invalid size ...passed 00:10:42.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:42.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:42.753 Test: blockdev write read max offset ...passed 00:10:43.011 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:43.011 Test: blockdev writev readv 8 blocks ...passed 00:10:43.011 Test: blockdev writev readv 30 x 1block ...passed 00:10:43.011 Test: blockdev writev readv block ...passed 00:10:43.011 Test: blockdev writev readv size > 128k ...passed 00:10:43.011 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:43.011 Test: blockdev comparev and writev ...[2024-11-19 02:51:53.594746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.594784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:43.011 [2024-11-19 02:51:53.594808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.594826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:43.011 [2024-11-19 02:51:53.595136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.595161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:43.011 [2024-11-19 02:51:53.595183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.595200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:43.011 [2024-11-19 02:51:53.595526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.595551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:43.011 [2024-11-19 02:51:53.595574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.595592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:43.011 [2024-11-19 02:51:53.595936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.595961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:43.011 [2024-11-19 02:51:53.595983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.011 [2024-11-19 02:51:53.596000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:43.270 passed 00:10:43.270 Test: blockdev nvme passthru rw ...passed 00:10:43.270 Test: blockdev nvme passthru vendor specific ...[2024-11-19 02:51:53.677952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.270 [2024-11-19 02:51:53.677979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:43.270 [2024-11-19 02:51:53.678122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.270 [2024-11-19 02:51:53.678150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:43.270 [2024-11-19 02:51:53.678284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.270 [2024-11-19 02:51:53.678308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:43.270 [2024-11-19 02:51:53.678450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.270 [2024-11-19 02:51:53.678474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:43.270 passed 00:10:43.270 Test: blockdev nvme admin passthru ...passed 00:10:43.270 Test: blockdev copy ...passed 00:10:43.270 00:10:43.270 Run Summary: Type Total Ran Passed Failed Inactive 00:10:43.270 suites 1 1 n/a 0 0 00:10:43.270 tests 23 23 23 0 0 00:10:43.270 asserts 152 152 152 0 n/a 00:10:43.270 00:10:43.270 Elapsed time = 1.380 seconds 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.528 rmmod nvme_tcp 00:10:43.528 rmmod nvme_fabrics 00:10:43.528 rmmod nvme_keyring 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 159193 ']' 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 159193 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 159193 ']' 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 159193 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159193 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159193' 00:10:43.528 killing process with pid 159193 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 159193 00:10:43.528 02:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 159193 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.801 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.708 02:51:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.708 00:10:45.708 real 0m6.384s 00:10:45.708 user 0m9.944s 00:10:45.708 sys 0m2.150s 00:10:45.708 02:51:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.708 02:51:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.708 ************************************ 00:10:45.708 END TEST nvmf_bdevio 00:10:45.708 ************************************ 00:10:45.708 02:51:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:45.708 00:10:45.708 real 3m55.444s 00:10:45.708 user 10m14.334s 00:10:45.708 sys 1m7.358s 00:10:45.708 
02:51:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.708 02:51:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.708 ************************************ 00:10:45.708 END TEST nvmf_target_core 00:10:45.708 ************************************ 00:10:45.708 02:51:56 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:45.708 02:51:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.708 02:51:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.708 02:51:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:45.967 ************************************ 00:10:45.967 START TEST nvmf_target_extra 00:10:45.967 ************************************ 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:45.967 * Looking for test storage... 00:10:45.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.967 --rc genhtml_branch_coverage=1 00:10:45.967 --rc genhtml_function_coverage=1 00:10:45.967 --rc genhtml_legend=1 00:10:45.967 --rc geninfo_all_blocks=1 00:10:45.967 --rc geninfo_unexecuted_blocks=1 00:10:45.967 00:10:45.967 ' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.967 --rc genhtml_branch_coverage=1 00:10:45.967 --rc genhtml_function_coverage=1 00:10:45.967 --rc genhtml_legend=1 00:10:45.967 --rc geninfo_all_blocks=1 00:10:45.967 --rc geninfo_unexecuted_blocks=1 00:10:45.967 00:10:45.967 ' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.967 --rc genhtml_branch_coverage=1 00:10:45.967 --rc genhtml_function_coverage=1 00:10:45.967 --rc genhtml_legend=1 00:10:45.967 --rc geninfo_all_blocks=1 00:10:45.967 --rc geninfo_unexecuted_blocks=1 00:10:45.967 00:10:45.967 ' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.967 --rc genhtml_branch_coverage=1 00:10:45.967 --rc genhtml_function_coverage=1 00:10:45.967 --rc genhtml_legend=1 00:10:45.967 --rc geninfo_all_blocks=1 00:10:45.967 --rc geninfo_unexecuted_blocks=1 00:10:45.967 00:10:45.967 ' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.967 02:51:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:45.968 ************************************ 00:10:45.968 START TEST nvmf_example 00:10:45.968 ************************************ 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:45.968 * Looking for test storage... 
00:10:45.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.968 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.227 --rc genhtml_branch_coverage=1 00:10:46.227 --rc genhtml_function_coverage=1 00:10:46.227 --rc genhtml_legend=1 00:10:46.227 --rc geninfo_all_blocks=1 00:10:46.227 --rc geninfo_unexecuted_blocks=1 00:10:46.227 00:10:46.227 ' 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.227 --rc genhtml_branch_coverage=1 00:10:46.227 --rc genhtml_function_coverage=1 00:10:46.227 --rc genhtml_legend=1 00:10:46.227 --rc geninfo_all_blocks=1 00:10:46.227 --rc geninfo_unexecuted_blocks=1 00:10:46.227 00:10:46.227 ' 00:10:46.227 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.228 --rc genhtml_branch_coverage=1 00:10:46.228 --rc genhtml_function_coverage=1 00:10:46.228 --rc genhtml_legend=1 00:10:46.228 --rc geninfo_all_blocks=1 00:10:46.228 --rc geninfo_unexecuted_blocks=1 00:10:46.228 00:10:46.228 ' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.228 --rc genhtml_branch_coverage=1 00:10:46.228 --rc genhtml_function_coverage=1 00:10:46.228 --rc genhtml_legend=1 00:10:46.228 --rc geninfo_all_blocks=1 00:10:46.228 --rc geninfo_unexecuted_blocks=1 00:10:46.228 00:10:46.228 ' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:46.228 02:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:46.228 02:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.228 02:51:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:48.769 02:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.769 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:48.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:48.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.770 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:48.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:48.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.771 02:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.771 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:48.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:10:48.772 00:10:48.772 --- 10.0.0.2 ping statistics --- 00:10:48.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.772 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:10:48.772 00:10:48.772 --- 10.0.0.1 ping statistics --- 00:10:48.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.772 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=161482 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 161482 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 161482 ']' 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.772 02:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.772 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.772 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:48.772 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:48.772 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.772 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.772 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.773 02:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:48.773 02:51:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:00.983 Initializing NVMe Controllers 00:11:00.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:00.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:00.983 Initialization complete. Launching workers. 00:11:00.983 ======================================================== 00:11:00.983 Latency(us) 00:11:00.983 Device Information : IOPS MiB/s Average min max 00:11:00.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14869.41 58.08 4303.58 672.33 20130.06 00:11:00.983 ======================================================== 00:11:00.983 Total : 14869.41 58.08 4303.58 672.33 20130.06 00:11:00.983 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.983 rmmod nvme_tcp 00:11:00.983 rmmod nvme_fabrics 00:11:00.983 rmmod nvme_keyring 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 161482 ']' 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 161482 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 161482 ']' 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 161482 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161482 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161482' 00:11:00.983 killing process with pid 161482 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 161482 00:11:00.983 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 161482 00:11:00.983 nvmf threads initialize successfully 00:11:00.983 bdev subsystem init successfully 00:11:00.984 created a nvmf target service 00:11:00.984 create targets's poll groups done 00:11:00.984 all subsystems of target started 00:11:00.984 nvmf target is running 00:11:00.984 all subsystems of target stopped 00:11:00.984 destroy targets's poll groups done 00:11:00.984 destroyed the nvmf target service 00:11:00.984 bdev subsystem finish successfully 00:11:00.984 nvmf threads destroy successfully 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.984 02:52:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 00:11:01.555 real 0m15.473s 00:11:01.555 user 0m42.300s 00:11:01.555 sys 0m3.500s 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 ************************************ 00:11:01.555 END TEST nvmf_example 00:11:01.555 ************************************ 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:01.555 02:52:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.555 02:52:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 ************************************ 00:11:01.555 START TEST nvmf_filesystem 00:11:01.555 ************************************ 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:01.555 * Looking for test storage... 00:11:01.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.555 --rc genhtml_branch_coverage=1 00:11:01.555 --rc genhtml_function_coverage=1 00:11:01.555 --rc genhtml_legend=1 00:11:01.555 --rc geninfo_all_blocks=1 00:11:01.555 --rc geninfo_unexecuted_blocks=1 00:11:01.555 00:11:01.555 ' 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.555 --rc genhtml_branch_coverage=1 00:11:01.555 --rc genhtml_function_coverage=1 00:11:01.555 --rc genhtml_legend=1 00:11:01.555 --rc geninfo_all_blocks=1 00:11:01.555 --rc geninfo_unexecuted_blocks=1 00:11:01.555 00:11:01.555 ' 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.555 --rc genhtml_branch_coverage=1 00:11:01.555 --rc genhtml_function_coverage=1 00:11:01.555 --rc genhtml_legend=1 00:11:01.555 --rc geninfo_all_blocks=1 00:11:01.555 --rc geninfo_unexecuted_blocks=1 00:11:01.555 00:11:01.555 ' 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.555 --rc genhtml_branch_coverage=1 00:11:01.555 --rc genhtml_function_coverage=1 00:11:01.555 --rc genhtml_legend=1 00:11:01.555 --rc geninfo_all_blocks=1 00:11:01.555 --rc geninfo_unexecuted_blocks=1 00:11:01.555 00:11:01.555 ' 00:11:01.555 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:01.556 02:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:01.556 
02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:01.556 02:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:01.556 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:01.557 #define SPDK_CONFIG_H 00:11:01.557 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:01.557 #define SPDK_CONFIG_APPS 1 00:11:01.557 #define SPDK_CONFIG_ARCH native 00:11:01.557 #undef SPDK_CONFIG_ASAN 00:11:01.557 #undef SPDK_CONFIG_AVAHI 00:11:01.557 #undef SPDK_CONFIG_CET 00:11:01.557 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:01.557 #define SPDK_CONFIG_COVERAGE 1 00:11:01.557 #define SPDK_CONFIG_CROSS_PREFIX 00:11:01.557 #undef SPDK_CONFIG_CRYPTO 00:11:01.557 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:01.557 #undef SPDK_CONFIG_CUSTOMOCF 00:11:01.557 #undef SPDK_CONFIG_DAOS 00:11:01.557 #define SPDK_CONFIG_DAOS_DIR 00:11:01.557 #define SPDK_CONFIG_DEBUG 1 00:11:01.557 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:01.557 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:01.557 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:01.557 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.557 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:01.557 #undef SPDK_CONFIG_DPDK_UADK 00:11:01.557 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:01.557 #define SPDK_CONFIG_EXAMPLES 1 00:11:01.557 #undef SPDK_CONFIG_FC 00:11:01.557 #define SPDK_CONFIG_FC_PATH 00:11:01.557 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:01.557 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:01.557 #define SPDK_CONFIG_FSDEV 1 00:11:01.557 #undef SPDK_CONFIG_FUSE 00:11:01.557 #undef SPDK_CONFIG_FUZZER 00:11:01.557 #define SPDK_CONFIG_FUZZER_LIB 00:11:01.557 #undef SPDK_CONFIG_GOLANG 00:11:01.557 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:01.557 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:01.557 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:01.557 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:01.557 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:01.557 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:01.557 #undef SPDK_CONFIG_HAVE_LZ4 00:11:01.557 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:01.557 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:01.557 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:01.557 #define SPDK_CONFIG_IDXD 1 00:11:01.557 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:01.557 #undef SPDK_CONFIG_IPSEC_MB 00:11:01.557 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:01.557 #define SPDK_CONFIG_ISAL 1 00:11:01.557 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:01.557 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:01.557 #define SPDK_CONFIG_LIBDIR 00:11:01.557 #undef SPDK_CONFIG_LTO 00:11:01.557 #define SPDK_CONFIG_MAX_LCORES 128 00:11:01.557 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:01.557 #define SPDK_CONFIG_NVME_CUSE 1 00:11:01.557 #undef SPDK_CONFIG_OCF 00:11:01.557 #define SPDK_CONFIG_OCF_PATH 00:11:01.557 #define SPDK_CONFIG_OPENSSL_PATH 00:11:01.557 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:01.557 #define SPDK_CONFIG_PGO_DIR 00:11:01.557 #undef SPDK_CONFIG_PGO_USE 00:11:01.557 #define SPDK_CONFIG_PREFIX /usr/local 00:11:01.557 #undef SPDK_CONFIG_RAID5F 00:11:01.557 #undef SPDK_CONFIG_RBD 00:11:01.557 #define SPDK_CONFIG_RDMA 1 00:11:01.557 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:01.557 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:01.557 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:01.557 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:01.557 #define SPDK_CONFIG_SHARED 1 00:11:01.557 #undef SPDK_CONFIG_SMA 00:11:01.557 #define SPDK_CONFIG_TESTS 1 00:11:01.557 #undef SPDK_CONFIG_TSAN 00:11:01.557 #define SPDK_CONFIG_UBLK 1 00:11:01.557 #define SPDK_CONFIG_UBSAN 1 00:11:01.557 #undef SPDK_CONFIG_UNIT_TESTS 00:11:01.557 #undef SPDK_CONFIG_URING 00:11:01.557 #define SPDK_CONFIG_URING_PATH 00:11:01.557 #undef SPDK_CONFIG_URING_ZNS 00:11:01.557 #undef SPDK_CONFIG_USDT 00:11:01.557 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:01.557 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:01.557 #define SPDK_CONFIG_VFIO_USER 1 00:11:01.557 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:01.557 #define SPDK_CONFIG_VHOST 1 00:11:01.557 #define SPDK_CONFIG_VIRTIO 1 00:11:01.557 #undef SPDK_CONFIG_VTUNE 00:11:01.557 #define SPDK_CONFIG_VTUNE_DIR 00:11:01.557 #define SPDK_CONFIG_WERROR 1 00:11:01.557 #define SPDK_CONFIG_WPDK_DIR 00:11:01.557 #undef SPDK_CONFIG_XNVME 00:11:01.557 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.557 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:01.819 02:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:01.819 02:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:01.819 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:01.820 
02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:01.820 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:01.821 02:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 163144 ]] 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 163144 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HKgXdI 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:01.821 02:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HKgXdI/tests/target /tmp/spdk.HKgXdI 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54521475072 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7467044864 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=30984228864 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30994083840 00:11:01.821 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=176128 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:01.822 * Looking for test storage... 
00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54521475072 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9681637376 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.822 --rc genhtml_branch_coverage=1 00:11:01.822 --rc genhtml_function_coverage=1 00:11:01.822 --rc genhtml_legend=1 00:11:01.822 --rc geninfo_all_blocks=1 00:11:01.822 --rc geninfo_unexecuted_blocks=1 00:11:01.822 00:11:01.822 ' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.822 --rc genhtml_branch_coverage=1 00:11:01.822 --rc genhtml_function_coverage=1 00:11:01.822 --rc genhtml_legend=1 00:11:01.822 --rc geninfo_all_blocks=1 00:11:01.822 --rc geninfo_unexecuted_blocks=1 00:11:01.822 00:11:01.822 ' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.822 --rc genhtml_branch_coverage=1 00:11:01.822 --rc genhtml_function_coverage=1 00:11:01.822 --rc genhtml_legend=1 00:11:01.822 --rc geninfo_all_blocks=1 00:11:01.822 --rc geninfo_unexecuted_blocks=1 00:11:01.822 00:11:01.822 ' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.822 --rc genhtml_branch_coverage=1 00:11:01.822 --rc genhtml_function_coverage=1 00:11:01.822 --rc genhtml_legend=1 00:11:01.822 --rc geninfo_all_blocks=1 00:11:01.822 --rc geninfo_unexecuted_blocks=1 00:11:01.822 00:11:01.822 ' 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.822 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.823 02:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.823 02:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.356 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:04.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:04.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.357 02:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:04.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:04.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.357 02:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:04.357 00:11:04.357 --- 10.0.0.2 ping statistics --- 00:11:04.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.357 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:11:04.357 00:11:04.357 --- 10.0.0.1 ping statistics --- 00:11:04.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.357 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:04.357 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.358 ************************************ 00:11:04.358 START TEST nvmf_filesystem_no_in_capsule 00:11:04.358 ************************************ 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=164820 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 164820 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 164820 ']' 00:11:04.358 02:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.358 02:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.358 [2024-11-19 02:52:14.848133] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:11:04.358 [2024-11-19 02:52:14.848227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.358 [2024-11-19 02:52:14.921868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.358 [2024-11-19 02:52:14.971529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.358 [2024-11-19 02:52:14.971590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.358 [2024-11-19 02:52:14.971605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.358 [2024-11-19 02:52:14.971620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.358 [2024-11-19 02:52:14.971640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
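Here the target is launched inside the cvl_0_0_ns_spdk namespace and the harness blocks until its RPC socket answers. Stripped of the xtrace decoration, the launch amounts to the following; the polling loop is only one plausible way to wait and is not necessarily what waitforlisten does internally:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the app is up (illustrative wait loop)
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done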
00:11:04.358 [2024-11-19 02:52:14.973470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.358 [2024-11-19 02:52:14.973535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.358 [2024-11-19 02:52:14.973585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.358 [2024-11-19 02:52:14.973588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.616 [2024-11-19 02:52:15.123494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.616 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.874 Malloc1 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.874 02:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.874 [2024-11-19 02:52:15.319947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.874 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:04.874 { 00:11:04.874 "name": "Malloc1", 00:11:04.874 "aliases": [ 00:11:04.874 "ef672d0b-5317-4331-be66-47c782344cb5" 00:11:04.874 ], 00:11:04.874 "product_name": "Malloc disk", 00:11:04.874 "block_size": 512, 00:11:04.874 "num_blocks": 1048576, 00:11:04.874 "uuid": "ef672d0b-5317-4331-be66-47c782344cb5", 00:11:04.874 "assigned_rate_limits": { 00:11:04.874 "rw_ios_per_sec": 0, 00:11:04.874 "rw_mbytes_per_sec": 0, 00:11:04.874 "r_mbytes_per_sec": 0, 00:11:04.874 "w_mbytes_per_sec": 0 00:11:04.874 }, 00:11:04.874 "claimed": true, 00:11:04.874 "claim_type": "exclusive_write", 00:11:04.874 "zoned": false, 00:11:04.874 "supported_io_types": { 00:11:04.874 "read": 
true, 00:11:04.874 "write": true, 00:11:04.875 "unmap": true, 00:11:04.875 "flush": true, 00:11:04.875 "reset": true, 00:11:04.875 "nvme_admin": false, 00:11:04.875 "nvme_io": false, 00:11:04.875 "nvme_io_md": false, 00:11:04.875 "write_zeroes": true, 00:11:04.875 "zcopy": true, 00:11:04.875 "get_zone_info": false, 00:11:04.875 "zone_management": false, 00:11:04.875 "zone_append": false, 00:11:04.875 "compare": false, 00:11:04.875 "compare_and_write": false, 00:11:04.875 "abort": true, 00:11:04.875 "seek_hole": false, 00:11:04.875 "seek_data": false, 00:11:04.875 "copy": true, 00:11:04.875 "nvme_iov_md": false 00:11:04.875 }, 00:11:04.875 "memory_domains": [ 00:11:04.875 { 00:11:04.875 "dma_device_id": "system", 00:11:04.875 "dma_device_type": 1 00:11:04.875 }, 00:11:04.875 { 00:11:04.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.875 "dma_device_type": 2 00:11:04.875 } 00:11:04.875 ], 00:11:04.875 "driver_specific": {} 00:11:04.875 } 00:11:04.875 ]' 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:04.875 02:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.808 02:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.808 02:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:05.808 02:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.808 02:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:05.808 02:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:07.705 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:07.706 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:07.706 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:07.706 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:07.963 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:08.220 02:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:09.152 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:09.152 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:09.152 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:09.152 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.152 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.410 ************************************ 00:11:09.410 START TEST filesystem_ext4 00:11:09.410 ************************************ 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
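[editor's note] For readers skimming the trace, the lines above cover the whole setup phase of the no-in-capsule run: the Malloc1 bdev is exported as a namespace of nqn.2016-06.io.spdk:cnode1, a TCP listener is added on 10.0.0.2:4420, the host connects with nvme-cli, and the resulting /dev/nvme0n1 is given a single GPT partition before the per-filesystem tests start. The sketch below restates that flow in plain shell. It is a condensed sketch, not the test suite itself: scripts/rpc.py stands in for the suite's rpc_cmd wrapper, an nvmf_tgt is assumed to be already running, and the NQN/address/port/serial values are taken directly from the trace.

#!/usr/bin/env bash
# Sketch of the target/host flow traced above (assumes a running SPDK nvmf_tgt).
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.2
PORT=4420

# Target side: export a 512 MiB malloc bdev (512-byte blocks) over NVMe/TCP.
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s "$PORT"

# Host side: connect, wait for the namespace to appear, then carve one GPT partition.
nvme connect -t tcp -n "$NQN" -a "$ADDR" -s "$PORT"
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1}')
parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

In the in-capsule variant later in this log the only setup difference is that the transport is created with "-c 4096", allowing write data to travel in the command capsule; the host-side steps are identical.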
00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:09.410 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:09.410 mke2fs 1.47.0 (5-Feb-2023) 00:11:09.410 Discarding device blocks: 0/522240 done 00:11:09.410 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:09.410 Filesystem UUID: 54e81927-ff04-47de-b4d2-e769f290d67e 00:11:09.410 Superblock backups stored on blocks: 00:11:09.410 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:09.410 00:11:09.411 Allocating group tables: 0/64 done 00:11:09.411 Writing inode tables: 0/64 done 00:11:12.696 Creating journal (8192 blocks): done 00:11:14.558 Writing superblocks and filesystem accounting information: 0/64 done 00:11:14.558 00:11:14.558 02:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:14.558 02:52:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:19.820 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:19.820 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:19.820 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:19.820 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:19.820 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:19.821 
02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 164820 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:19.821 00:11:19.821 real 0m10.593s 00:11:19.821 user 0m0.017s 00:11:19.821 sys 0m0.097s 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 ************************************ 00:11:19.821 END TEST filesystem_ext4 00:11:19.821 ************************************ 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 ************************************ 00:11:19.821 START TEST filesystem_btrfs 00:11:19.821 ************************************ 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:19.821 02:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:19.821 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:20.386 btrfs-progs v6.8.1 00:11:20.386 See https://btrfs.readthedocs.io for more information. 00:11:20.386 00:11:20.386 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:20.386 NOTE: several default settings have changed in version 5.15, please make sure 00:11:20.386 this does not affect your deployments: 00:11:20.386 - DUP for metadata (-m dup) 00:11:20.386 - enabled no-holes (-O no-holes) 00:11:20.386 - enabled free-space-tree (-R free-space-tree) 00:11:20.386 00:11:20.386 Label: (null) 00:11:20.386 UUID: 9e54e6df-3291-414d-89dd-af4591cf6e37 00:11:20.386 Node size: 16384 00:11:20.386 Sector size: 4096 (CPU page size: 4096) 00:11:20.386 Filesystem size: 510.00MiB 00:11:20.386 Block group profiles: 00:11:20.386 Data: single 8.00MiB 00:11:20.386 Metadata: DUP 32.00MiB 00:11:20.386 System: DUP 8.00MiB 00:11:20.386 SSD detected: yes 00:11:20.386 Zoned device: no 00:11:20.386 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:20.386 Checksum: crc32c 00:11:20.386 Number of devices: 1 00:11:20.386 Devices: 00:11:20.386 ID SIZE PATH 00:11:20.386 1 510.00MiB /dev/nvme0n1p1 00:11:20.386 00:11:20.386 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:20.386 02:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 164820 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.319 
02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.319 00:11:21.319 real 0m1.337s 00:11:21.319 user 0m0.017s 00:11:21.319 sys 0m0.138s 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:21.319 ************************************ 00:11:21.319 END TEST filesystem_btrfs 00:11:21.319 ************************************ 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:21.319 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.320 ************************************ 00:11:21.320 START TEST filesystem_xfs 00:11:21.320 ************************************ 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:21.320 02:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:21.578 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:21.578 = sectsz=512 attr=2, projid32bit=1 00:11:21.578 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:21.578 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:21.578 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:21.578 = sunit=0 swidth=0 blks 00:11:21.578 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:21.578 log =internal log bsize=4096 blocks=16384, version=2 00:11:21.578 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:21.578 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:22.510 Discarding blocks...Done. 00:11:22.510 02:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:22.510 02:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 164820 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.036 00:11:25.036 real 0m3.435s 00:11:25.036 user 0m0.021s 00:11:25.036 sys 0m0.088s 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 ************************************ 00:11:25.036 END TEST filesystem_xfs 00:11:25.036 ************************************ 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.036 02:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 164820 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 164820 ']' 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 164820 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164820 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164820' 00:11:25.036 killing process with pid 164820 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 164820 00:11:25.036 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 164820 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:25.603 00:11:25.603 real 0m21.125s 00:11:25.603 user 1m21.953s 00:11:25.603 sys 0m2.648s 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.603 ************************************ 00:11:25.603 END TEST nvmf_filesystem_no_in_capsule 00:11:25.603 ************************************ 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.603 ************************************ 00:11:25.603 START TEST nvmf_filesystem_in_capsule 00:11:25.603 ************************************ 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=167467 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 167467 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 167467 ']' 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.603 02:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.603 [2024-11-19 02:52:36.036577] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:11:25.603 [2024-11-19 02:52:36.036657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.603 [2024-11-19 02:52:36.118858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.603 [2024-11-19 02:52:36.162480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.603 [2024-11-19 02:52:36.162538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.603 [2024-11-19 02:52:36.162566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.603 [2024-11-19 02:52:36.162578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.603 [2024-11-19 02:52:36.162587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.603 [2024-11-19 02:52:36.163990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.603 [2024-11-19 02:52:36.164056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.603 [2024-11-19 02:52:36.164120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.603 [2024-11-19 02:52:36.164123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.862 [2024-11-19 02:52:36.306862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.862 02:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.862 Malloc1 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.862 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.120 [2024-11-19 02:52:36.488867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:26.120 02:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.120 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:26.120 { 00:11:26.120 "name": "Malloc1", 00:11:26.120 "aliases": [ 00:11:26.120 "a6357616-9218-4448-ab47-75e47959f729" 00:11:26.120 ], 00:11:26.120 "product_name": "Malloc disk", 00:11:26.120 "block_size": 512, 00:11:26.120 "num_blocks": 1048576, 00:11:26.120 "uuid": "a6357616-9218-4448-ab47-75e47959f729", 00:11:26.120 "assigned_rate_limits": { 00:11:26.120 "rw_ios_per_sec": 0, 00:11:26.120 "rw_mbytes_per_sec": 0, 00:11:26.120 "r_mbytes_per_sec": 0, 00:11:26.120 "w_mbytes_per_sec": 0 00:11:26.120 }, 00:11:26.120 "claimed": true, 00:11:26.120 "claim_type": "exclusive_write", 00:11:26.120 "zoned": false, 00:11:26.120 "supported_io_types": { 00:11:26.120 "read": true, 00:11:26.120 "write": true, 00:11:26.120 "unmap": true, 00:11:26.120 "flush": true, 00:11:26.120 "reset": true, 00:11:26.120 "nvme_admin": false, 00:11:26.120 "nvme_io": false, 00:11:26.120 "nvme_io_md": false, 00:11:26.120 "write_zeroes": true, 00:11:26.121 "zcopy": true, 00:11:26.121 "get_zone_info": false, 00:11:26.121 "zone_management": false, 00:11:26.121 "zone_append": false, 00:11:26.121 "compare": false, 00:11:26.121 "compare_and_write": false, 00:11:26.121 "abort": true, 00:11:26.121 "seek_hole": false, 00:11:26.121 "seek_data": false, 00:11:26.121 "copy": true, 00:11:26.121 "nvme_iov_md": false 00:11:26.121 }, 00:11:26.121 "memory_domains": [ 00:11:26.121 { 00:11:26.121 "dma_device_id": "system", 00:11:26.121 "dma_device_type": 1 00:11:26.121 }, 00:11:26.121 { 00:11:26.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.121 "dma_device_type": 2 00:11:26.121 } 00:11:26.121 ], 00:11:26.121 "driver_specific": {} 00:11:26.121 } 00:11:26.121 ]' 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:26.121 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.690 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.690 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:26.690 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.690 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:26.690 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:28.590 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:29.155 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:29.413 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.346 ************************************ 00:11:30.346 START TEST filesystem_in_capsule_ext4 00:11:30.346 ************************************ 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:30.346 02:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:30.346 mke2fs 1.47.0 (5-Feb-2023) 00:11:30.604 Discarding device blocks: 0/522240 done 00:11:30.604 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:30.604 Filesystem UUID: 1f4f69b9-edb7-4477-b680-903dc6533a18 00:11:30.604 Superblock backups stored on blocks: 00:11:30.604 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:30.604 00:11:30.604 Allocating group tables: 0/64 done 00:11:30.604 Writing inode tables: 
0/64 done 00:11:30.862 Creating journal (8192 blocks): done 00:11:31.942 Writing superblocks and filesystem accounting information: 0/64 done 00:11:31.942 00:11:31.942 02:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:31.942 02:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 167467 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.497 00:11:38.497 real 0m7.405s 00:11:38.497 user 0m0.020s 00:11:38.497 sys 0m0.059s 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:38.497 ************************************ 00:11:38.497 END TEST filesystem_in_capsule_ext4 00:11:38.497 ************************************ 00:11:38.497 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 
************************************ 00:11:38.498 START TEST filesystem_in_capsule_btrfs 00:11:38.498 ************************************ 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:38.498 btrfs-progs v6.8.1 00:11:38.498 See https://btrfs.readthedocs.io for more information. 00:11:38.498 00:11:38.498 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:38.498 NOTE: several default settings have changed in version 5.15, please make sure 00:11:38.498 this does not affect your deployments: 00:11:38.498 - DUP for metadata (-m dup) 00:11:38.498 - enabled no-holes (-O no-holes) 00:11:38.498 - enabled free-space-tree (-R free-space-tree) 00:11:38.498 00:11:38.498 Label: (null) 00:11:38.498 UUID: 2c83a794-edca-4726-a4f5-5da34dacbe47 00:11:38.498 Node size: 16384 00:11:38.498 Sector size: 4096 (CPU page size: 4096) 00:11:38.498 Filesystem size: 510.00MiB 00:11:38.498 Block group profiles: 00:11:38.498 Data: single 8.00MiB 00:11:38.498 Metadata: DUP 32.00MiB 00:11:38.498 System: DUP 8.00MiB 00:11:38.498 SSD detected: yes 00:11:38.498 Zoned device: no 00:11:38.498 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:38.498 Checksum: crc32c 00:11:38.498 Number of devices: 1 00:11:38.498 Devices: 00:11:38.498 ID SIZE PATH 00:11:38.498 1 510.00MiB /dev/nvme0n1p1 00:11:38.498 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 167467 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.498 00:11:38.498 real 0m0.495s 00:11:38.498 user 0m0.016s 00:11:38.498 sys 0m0.103s 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:38.498 ************************************ 00:11:38.498 END TEST filesystem_in_capsule_btrfs 00:11:38.498 ************************************ 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 ************************************ 00:11:38.498 START TEST filesystem_in_capsule_xfs 00:11:38.498 ************************************ 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:38.498 02:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:38.498 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:38.498 = sectsz=512 attr=2, projid32bit=1 00:11:38.498 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:38.498 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:38.498 data = bsize=4096 blocks=130560, imaxpct=25 00:11:38.498 = sunit=0 swidth=0 blks 00:11:38.498 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:38.498 log =internal log bsize=4096 blocks=16384, version=2 00:11:38.498 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:38.498 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:39.431 Discarding blocks...Done. 
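The xtrace above shows target/filesystem.sh's make_filesystem helper resolving the force flag for the filesystem type and then invoking mkfs against the exported namespace partition. A minimal sketch of that pattern, reconstructed from the traced commands rather than copied from the SPDK source (the retry limit and sleep are assumptions, not from the trace):

  make_filesystem() {
      local fstype=$1 dev_name=$2 i=0 force=
      # ext4 spells its force flag -F; btrfs and xfs use -f, as the trace shows
      [ "$fstype" = ext4 ] && force=-F || force=-f
      until "mkfs.$fstype" $force "$dev_name"; do
          (( ++i > 3 )) && return 1    # assumed retry bound
          sleep 1
      done
  }
  # usage matching the trace: make_filesystem xfs /dev/nvme0n1p1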
00:11:39.431 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:39.431 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 167467 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.960 00:11:41.960 real 0m3.280s 00:11:41.960 user 0m0.016s 00:11:41.960 sys 0m0.058s 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.960 ************************************ 00:11:41.960 END TEST filesystem_in_capsule_xfs 00:11:41.960 ************************************ 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 167467 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 167467 ']' 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 167467 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167467 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167467' 00:11:41.960 killing process with pid 167467 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 167467 00:11:41.960 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 167467 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:42.528 00:11:42.528 real 0m16.945s 00:11:42.528 user 1m5.565s 00:11:42.528 sys 0m2.209s 00:11:42.528 02:52:52 
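The disconnect path traced above issues nvme disconnect against the subsystem NQN and then polls lsblk until the namespace carrying the test serial number disappears, before the subsystem is deleted and the target process is killed. A minimal sketch of that wait-for-disconnect pattern (NQN and serial taken from the trace; the loop bound is an assumption):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  i=0
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      (( ++i > 15 )) && { echo "namespace still present after disconnect" >&2; exit 1; }
      sleep 1
  done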
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.528 ************************************ 00:11:42.528 END TEST nvmf_filesystem_in_capsule 00:11:42.528 ************************************ 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.528 rmmod nvme_tcp 00:11:42.528 rmmod nvme_fabrics 00:11:42.528 rmmod nvme_keyring 00:11:42.528 02:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.528 02:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.442 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.442 00:11:44.442 real 0m43.033s 00:11:44.442 user 2m28.632s 00:11:44.442 sys 0m6.716s 00:11:44.442 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.442 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.442 
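nvmftestfini, traced above, unloads the NVMe/TCP host modules, strips only the iptables rules the test added (they carry an SPDK_NVMF comment), and removes the target-side network state. A minimal sketch of that cleanup, with interface and namespace names taken from the trace and the exact ordering inside nvmftestfini treated as an assumption:

  modprobe -r nvme-tcp nvme-fabrics 2>/dev/null         # host-side modules loaded for the test
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep every rule the test did not add
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # target-side namespace
  ip -4 addr flush cvl_0_1                              # initiator interface left in the root ns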
************************************ 00:11:44.442 END TEST nvmf_filesystem 00:11:44.442 ************************************ 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.702 ************************************ 00:11:44.702 START TEST nvmf_target_discovery 00:11:44.702 ************************************ 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:44.702 * Looking for test storage... 00:11:44.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:44.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.702 --rc genhtml_branch_coverage=1 00:11:44.702 --rc genhtml_function_coverage=1 00:11:44.702 --rc genhtml_legend=1 00:11:44.702 --rc geninfo_all_blocks=1 00:11:44.702 --rc geninfo_unexecuted_blocks=1 00:11:44.702 00:11:44.702 ' 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:44.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.702 --rc genhtml_branch_coverage=1 00:11:44.702 --rc genhtml_function_coverage=1 00:11:44.702 --rc genhtml_legend=1 00:11:44.702 --rc geninfo_all_blocks=1 00:11:44.702 --rc geninfo_unexecuted_blocks=1 00:11:44.702 00:11:44.702 ' 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:44.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.702 --rc genhtml_branch_coverage=1 00:11:44.702 --rc genhtml_function_coverage=1 00:11:44.702 --rc genhtml_legend=1 00:11:44.702 --rc geninfo_all_blocks=1 00:11:44.702 --rc geninfo_unexecuted_blocks=1 00:11:44.702 00:11:44.702 ' 00:11:44.702 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:44.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.702 --rc genhtml_branch_coverage=1 00:11:44.703 --rc genhtml_function_coverage=1 00:11:44.703 --rc genhtml_legend=1 00:11:44.703 --rc geninfo_all_blocks=1 00:11:44.703 --rc geninfo_unexecuted_blocks=1 00:11:44.703 00:11:44.703 ' 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.703 02:52:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.240 02:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.240 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.240 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.241 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.241 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.241 02:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:11:47.241 00:11:47.241 --- 10.0.0.2 ping statistics --- 00:11:47.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.241 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:11:47.241 00:11:47.241 --- 10.0.0.1 ping statistics --- 00:11:47.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.241 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=171618 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 171618 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 171618 ']' 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.241 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.241 [2024-11-19 02:52:57.689891] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:11:47.241 [2024-11-19 02:52:57.689989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.241 [2024-11-19 02:52:57.763358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.241 [2024-11-19 02:52:57.808779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.241 [2024-11-19 02:52:57.808841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.241 [2024-11-19 02:52:57.808870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.242 [2024-11-19 02:52:57.808881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.242 [2024-11-19 02:52:57.808891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
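The trace above launches nvmf_tgt inside the target namespace and waits for its RPC socket; the entries that follow provision the TCP transport, null bdevs, subsystems and listeners over that socket. A minimal sketch of the same sequence as direct rpc.py calls (the polling loop stands in for waitforlisten and is an assumption; the RPC names and arguments are the ones visible in the trace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1      # target died before the RPC socket came up
      sleep 0.5
  done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_null_create Null1 102400 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

With that in place, the nvme discover command run shortly after this point in the trace reports one discovery log entry per subsystem plus the 4430 referral, six records in total.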
00:11:47.242 [2024-11-19 02:52:57.810351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.242 [2024-11-19 02:52:57.810467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.242 [2024-11-19 02:52:57.810596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.242 [2024-11-19 02:52:57.810600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 [2024-11-19 02:52:57.953378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 Null1 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 [2024-11-19 02:52:58.001749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 Null2 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:47.501 Null3 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.501 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.502 Null4 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.502 02:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.502 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:47.760 00:11:47.760 Discovery Log Number of Records 6, Generation counter 6 00:11:47.760 =====Discovery Log Entry 0====== 00:11:47.760 trtype: tcp 00:11:47.760 adrfam: ipv4 00:11:47.760 subtype: current discovery subsystem 00:11:47.760 treq: not required 00:11:47.760 portid: 0 00:11:47.760 trsvcid: 4420 00:11:47.760 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.760 traddr: 10.0.0.2 00:11:47.760 eflags: explicit discovery connections, duplicate discovery information 00:11:47.760 sectype: none 00:11:47.760 =====Discovery Log Entry 1====== 00:11:47.760 trtype: tcp 00:11:47.760 adrfam: ipv4 00:11:47.760 subtype: nvme subsystem 00:11:47.760 treq: not required 00:11:47.760 portid: 0 00:11:47.760 trsvcid: 4420 00:11:47.760 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:47.760 traddr: 10.0.0.2 00:11:47.760 eflags: none 00:11:47.760 sectype: none 00:11:47.760 =====Discovery Log Entry 2====== 00:11:47.760 trtype: tcp 00:11:47.760 adrfam: ipv4 00:11:47.760 subtype: nvme subsystem 00:11:47.760 treq: not required 00:11:47.760 portid: 0 00:11:47.760 trsvcid: 4420 00:11:47.760 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:47.760 traddr: 10.0.0.2 00:11:47.760 eflags: none 00:11:47.760 sectype: none 00:11:47.760 =====Discovery Log Entry 3====== 00:11:47.760 trtype: tcp 00:11:47.760 adrfam: ipv4 00:11:47.760 subtype: nvme subsystem 00:11:47.760 treq: not required 00:11:47.760 portid: 0 00:11:47.760 trsvcid: 4420 00:11:47.760 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:47.760 traddr: 10.0.0.2 00:11:47.760 eflags: none 00:11:47.760 sectype: none 00:11:47.760 =====Discovery Log Entry 4====== 00:11:47.760 trtype: tcp 00:11:47.760 adrfam: ipv4 00:11:47.760 subtype: nvme subsystem 
00:11:47.760 treq: not required 00:11:47.760 portid: 0 00:11:47.760 trsvcid: 4420 00:11:47.760 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:47.760 traddr: 10.0.0.2 00:11:47.760 eflags: none 00:11:47.760 sectype: none 00:11:47.760 =====Discovery Log Entry 5====== 00:11:47.760 trtype: tcp 00:11:47.760 adrfam: ipv4 00:11:47.760 subtype: discovery subsystem referral 00:11:47.760 treq: not required 00:11:47.760 portid: 0 00:11:47.760 trsvcid: 4430 00:11:47.760 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.760 traddr: 10.0.0.2 00:11:47.760 eflags: none 00:11:47.760 sectype: none 00:11:47.760 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:47.760 Perform nvmf subsystem discovery via RPC 00:11:47.760 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:47.760 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.760 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.760 [ 00:11:47.760 { 00:11:47.760 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:47.761 "subtype": "Discovery", 00:11:47.761 "listen_addresses": [ 00:11:47.761 { 00:11:47.761 "trtype": "TCP", 00:11:47.761 "adrfam": "IPv4", 00:11:47.761 "traddr": "10.0.0.2", 00:11:47.761 "trsvcid": "4420" 00:11:47.761 } 00:11:47.761 ], 00:11:47.761 "allow_any_host": true, 00:11:47.761 "hosts": [] 00:11:47.761 }, 00:11:47.761 { 00:11:47.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.761 "subtype": "NVMe", 00:11:47.761 "listen_addresses": [ 00:11:47.761 { 00:11:47.761 "trtype": "TCP", 00:11:47.761 "adrfam": "IPv4", 00:11:47.761 "traddr": "10.0.0.2", 00:11:47.761 "trsvcid": "4420" 00:11:47.761 } 00:11:47.761 ], 00:11:47.761 "allow_any_host": true, 00:11:47.761 "hosts": [], 00:11:47.761 "serial_number": "SPDK00000000000001", 00:11:47.761 "model_number": "SPDK bdev Controller", 00:11:47.761 "max_namespaces": 32, 00:11:47.761 "min_cntlid": 1, 00:11:47.761 "max_cntlid": 65519, 00:11:47.761 "namespaces": [ 00:11:47.761 { 00:11:47.761 "nsid": 1, 00:11:47.761 "bdev_name": "Null1", 00:11:47.761 "name": "Null1", 00:11:47.761 "nguid": "919988E401BB4E89812EDADDAEA6E8E2", 00:11:47.761 "uuid": "919988e4-01bb-4e89-812e-daddaea6e8e2" 00:11:47.761 } 00:11:47.761 ] 00:11:47.761 }, 00:11:47.761 { 00:11:47.761 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:47.761 "subtype": "NVMe", 00:11:47.761 "listen_addresses": [ 00:11:47.761 { 00:11:47.761 "trtype": "TCP", 00:11:47.761 "adrfam": "IPv4", 00:11:47.761 "traddr": "10.0.0.2", 00:11:47.761 "trsvcid": "4420" 00:11:47.761 } 00:11:47.761 ], 00:11:47.761 "allow_any_host": true, 00:11:47.761 "hosts": [], 00:11:47.761 "serial_number": "SPDK00000000000002", 00:11:47.761 "model_number": "SPDK bdev Controller", 00:11:47.761 "max_namespaces": 32, 00:11:47.761 "min_cntlid": 1, 00:11:47.761 "max_cntlid": 65519, 00:11:47.761 "namespaces": [ 00:11:47.761 { 00:11:47.761 "nsid": 1, 00:11:47.761 "bdev_name": "Null2", 00:11:47.761 "name": "Null2", 00:11:47.761 "nguid": "F7B87E4A147E43F4ACFC7CA93E9694F3", 00:11:47.761 "uuid": "f7b87e4a-147e-43f4-acfc-7ca93e9694f3" 00:11:47.761 } 00:11:47.761 ] 00:11:47.761 }, 00:11:47.761 { 00:11:47.761 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:47.761 "subtype": "NVMe", 00:11:47.761 "listen_addresses": [ 00:11:47.761 { 00:11:47.761 "trtype": "TCP", 00:11:47.761 "adrfam": "IPv4", 00:11:47.761 "traddr": "10.0.0.2", 
00:11:47.761 "trsvcid": "4420" 00:11:47.761 } 00:11:47.761 ], 00:11:47.761 "allow_any_host": true, 00:11:47.761 "hosts": [], 00:11:47.761 "serial_number": "SPDK00000000000003", 00:11:47.761 "model_number": "SPDK bdev Controller", 00:11:47.761 "max_namespaces": 32, 00:11:47.761 "min_cntlid": 1, 00:11:47.761 "max_cntlid": 65519, 00:11:47.761 "namespaces": [ 00:11:47.761 { 00:11:47.761 "nsid": 1, 00:11:47.761 "bdev_name": "Null3", 00:11:47.761 "name": "Null3", 00:11:47.761 "nguid": "F5E52BEDEE82421984FF32FD48CDEAE1", 00:11:47.761 "uuid": "f5e52bed-ee82-4219-84ff-32fd48cdeae1" 00:11:47.761 } 00:11:47.761 ] 00:11:47.761 }, 00:11:47.761 { 00:11:47.761 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:47.761 "subtype": "NVMe", 00:11:47.761 "listen_addresses": [ 00:11:47.761 { 00:11:47.761 "trtype": "TCP", 00:11:47.761 "adrfam": "IPv4", 00:11:47.761 "traddr": "10.0.0.2", 00:11:47.761 "trsvcid": "4420" 00:11:47.761 } 00:11:47.761 ], 00:11:47.761 "allow_any_host": true, 00:11:47.761 "hosts": [], 00:11:47.761 "serial_number": "SPDK00000000000004", 00:11:47.761 "model_number": "SPDK bdev Controller", 00:11:47.761 "max_namespaces": 32, 00:11:47.761 "min_cntlid": 1, 00:11:47.761 "max_cntlid": 65519, 00:11:47.761 "namespaces": [ 00:11:47.761 { 00:11:47.761 "nsid": 1, 00:11:47.761 "bdev_name": "Null4", 00:11:47.761 "name": "Null4", 00:11:47.761 "nguid": "82E239E7254C4C2ABA5B3ACAAA7768D9", 00:11:47.761 "uuid": "82e239e7-254c-4c2a-ba5b-3acaaa7768d9" 00:11:47.761 } 00:11:47.761 ] 00:11:47.761 } 00:11:47.761 ] 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.761 02:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.761 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.762 02:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.762 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.020 rmmod nvme_tcp 00:11:48.020 rmmod nvme_fabrics 00:11:48.020 rmmod nvme_keyring 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 171618 ']' 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 171618 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 171618 ']' 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 171618 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171618 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171618' 00:11:48.020 killing process with pid 171618 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 171618 00:11:48.020 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 171618 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.279 02:52:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.188 00:11:50.188 real 0m5.614s 00:11:50.188 user 0m4.483s 00:11:50.188 sys 0m1.988s 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.188 ************************************ 00:11:50.188 END TEST nvmf_target_discovery 00:11:50.188 ************************************ 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.188 ************************************ 00:11:50.188 START TEST nvmf_referrals 00:11:50.188 ************************************ 00:11:50.188 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:50.448 * Looking for test storage... 
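[Editor's note] The nvmf_target_discovery run above creates four null bdevs, exposes each one through its own subsystem with a TCP listener on 10.0.0.2:4420, adds a discovery referral on port 4430, and then checks that `nvme discover` reports six discovery log records (four NVMe subsystems, the local discovery subsystem, and the referral) and that `nvmf_get_subsystems` shows the same picture over JSON-RPC, before tearing everything down. Below is a minimal sketch of that RPC sequence driven directly with rpc.py instead of the test's rpc_cmd wrapper; the rpc.py path and the omission of the --hostnqn/--hostid options are assumptions for brevity, everything else is copied from the log.

# Sketch only: reproduces the discovery test's setup/verify steps (rpc.py path assumed).
for i in 1 2 3 4; do
  scripts/rpc.py bdev_null_create Null$i 102400 512              # size/block-size values from the log
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
      -a -s SPDK0000000000000$i                                  # allow any host, fixed serial number
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
done
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420                         # expect 6 discovery log records
scripts/rpc.py nvmf_get_subsystems                               # same view over JSON-RPC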
00:11:50.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.448 --rc genhtml_branch_coverage=1 00:11:50.448 --rc genhtml_function_coverage=1 00:11:50.448 --rc genhtml_legend=1 00:11:50.448 --rc geninfo_all_blocks=1 00:11:50.448 --rc geninfo_unexecuted_blocks=1 00:11:50.448 00:11:50.448 ' 00:11:50.448 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.448 --rc genhtml_branch_coverage=1 00:11:50.448 --rc genhtml_function_coverage=1 00:11:50.448 --rc genhtml_legend=1 00:11:50.449 --rc geninfo_all_blocks=1 00:11:50.449 --rc geninfo_unexecuted_blocks=1 00:11:50.449 00:11:50.449 ' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.449 --rc genhtml_branch_coverage=1 00:11:50.449 --rc genhtml_function_coverage=1 00:11:50.449 --rc genhtml_legend=1 00:11:50.449 --rc geninfo_all_blocks=1 00:11:50.449 --rc geninfo_unexecuted_blocks=1 00:11:50.449 00:11:50.449 ' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.449 --rc genhtml_branch_coverage=1 00:11:50.449 --rc genhtml_function_coverage=1 00:11:50.449 --rc genhtml_legend=1 00:11:50.449 --rc geninfo_all_blocks=1 00:11:50.449 --rc geninfo_unexecuted_blocks=1 00:11:50.449 00:11:50.449 ' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
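[Editor's note] Before the referrals test proper starts, nvmf/common.sh establishes the host identity that every nvme-cli call in these logs carries: it generates a host NQN with `nvme gen-hostnqn` and reuses its UUID portion as the host ID (here 5b23e107-7094-e311-b1cb-001e67a97d55). A small sketch of that pattern follows; the exact derivation of NVME_HOSTID is an illustrative assumption, and the sample discover call simply mirrors the port-8009 queries seen later in this log.

# Sketch of the host-identity setup, assuming the UUID is extracted from the generated NQN.
NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}               # reuse the uuid suffix as the host ID (assumption)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Every discovery call in the test then presents the same identity to the target:
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json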
00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.449 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:52.988 02:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:52.988 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:52.988 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:52.988 
02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:52.988 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:52.988 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.988 02:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.988 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:52.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:11:52.989 00:11:52.989 --- 10.0.0.2 ping statistics --- 00:11:52.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.989 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:11:52.989 00:11:52.989 --- 10.0.0.1 ping statistics --- 00:11:52.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.989 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=173720 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 173720 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 173720 ']' 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
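[Editor's note] The nvmftestinit block above builds the two-port test topology this run uses: the second E810 port (cvl_0_0) is moved into a network namespace and addressed as 10.0.0.2/24 for the target, the first port (cvl_0_1) stays in the root namespace as the 10.0.0.1/24 initiator, TCP/4420 is opened in iptables, both directions are ping-checked, and nvmf_tgt is then started inside the namespace (pid 173720 here). A condensed sketch of those steps, with the iproute2/iptables commands taken from the log (interface names are whatever this machine enumerates; the nvmf_tgt path is shown relative to the SPDK tree):

# Sketch of the namespace-backed loopback topology nvmftestinit builds on this host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace
# The target application itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &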
00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.989 [2024-11-19 02:53:03.289387] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:11:52.989 [2024-11-19 02:53:03.289478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.989 [2024-11-19 02:53:03.364297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.989 [2024-11-19 02:53:03.413717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.989 [2024-11-19 02:53:03.413774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.989 [2024-11-19 02:53:03.413790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.989 [2024-11-19 02:53:03.413802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.989 [2024-11-19 02:53:03.413813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.989 [2024-11-19 02:53:03.415348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.989 [2024-11-19 02:53:03.415929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.989 [2024-11-19 02:53:03.415991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.989 [2024-11-19 02:53:03.415974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.989 [2024-11-19 02:53:03.561608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:52.989 [2024-11-19 02:53:03.573901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.989 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.248 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:53.506 02:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.506 02:53:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:53.764 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
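The second phase above attaches a subsystem NQN to each referral via -n: one entry pointing back at the discovery service and one at nqn.2016-06.io.spdk:cnode1. A hedged sketch of those two calls, again assuming scripts/rpc.py and the default socket:

    # Referrals carrying an explicit subsystem NQN, as exercised above.
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # Both entries report traddr 127.0.0.2, which is what the rpc check asserts.
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'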
\r\p\c ]] 00:11:53.765 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.765 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.765 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.765 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.022 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.023 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.281 02:53:04 
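The get_referral_ips/get_discovery_entries helpers seen in the trace both parse the JSON discovery log page returned by nvme-cli and filter it with jq. The sketch below condenses that flow; the addresses and host NQN are the ones used in this run, and the wrapper function name is illustrative.

    discover_json() {
        nvme discover -t tcp -a 10.0.0.2 -s 8009 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
            --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -o json
    }
    # Referral addresses only: every record except the discovery subsystem we queried.
    discover_json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Subsystem NQN of a given record class, as get_discovery_entries does.
    discover_json | jq '.records[] | select(.subtype == "nvme subsystem")' | jq -r .subnqn
    discover_json | jq '.records[] | select(.subtype == "discovery subsystem referral")' | jq -r .subnqn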
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.281 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.539 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:54.539 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:54.539 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:54.539 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:54.539 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:54.539 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.539 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:54.539 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:54.539 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:54.539 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:54.539 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:11:54.539 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.539 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.797 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.056 rmmod nvme_tcp 00:11:55.056 rmmod nvme_fabrics 00:11:55.056 rmmod nvme_keyring 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 173720 ']' 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 173720 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 173720 ']' 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 173720 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173720 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173720' 00:11:55.056 killing process with pid 173720 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 173720 00:11:55.056 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 173720 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.317 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
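nvmftestfini, traced above, stops the target by PID, unloads the host-side transport modules and restores only the firewall rules the test tagged. A condensed sketch of that teardown (173720 is the PID from this run; the namespace delete is what _remove_spdk_ns amounts to, its own trace being suppressed here):

    kill "$nvmfpid" && wait "$nvmfpid"        # wait works because this shell started the target
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Drop only rules carrying the SPDK_NVMF comment, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true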
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.227 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.227 00:11:57.227 real 0m7.021s 00:11:57.227 user 0m10.817s 00:11:57.227 sys 0m2.334s 00:11:57.227 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.227 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.227 ************************************ 00:11:57.227 END TEST nvmf_referrals 00:11:57.227 ************************************ 00:11:57.227 02:53:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:57.227 02:53:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.227 02:53:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.227 02:53:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.487 ************************************ 00:11:57.487 START TEST nvmf_connect_disconnect 00:11:57.487 ************************************ 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:57.487 * Looking for test storage... 00:11:57.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:57.487 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:57.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.488 --rc genhtml_branch_coverage=1 00:11:57.488 --rc genhtml_function_coverage=1 00:11:57.488 --rc genhtml_legend=1 00:11:57.488 --rc geninfo_all_blocks=1 00:11:57.488 --rc geninfo_unexecuted_blocks=1 00:11:57.488 00:11:57.488 ' 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:57.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.488 --rc genhtml_branch_coverage=1 00:11:57.488 --rc genhtml_function_coverage=1 00:11:57.488 --rc genhtml_legend=1 00:11:57.488 --rc geninfo_all_blocks=1 00:11:57.488 --rc geninfo_unexecuted_blocks=1 00:11:57.488 00:11:57.488 ' 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:57.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.488 --rc genhtml_branch_coverage=1 00:11:57.488 --rc genhtml_function_coverage=1 00:11:57.488 --rc genhtml_legend=1 00:11:57.488 --rc geninfo_all_blocks=1 00:11:57.488 --rc geninfo_unexecuted_blocks=1 00:11:57.488 00:11:57.488 ' 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:57.488 --rc 
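The lcov check traced above runs scripts/common.sh's version comparison: split both dotted versions on '.', then compare field by field, treating missing fields as 0. The helper below is an illustrative reimplementation of that idea, not the script's own code.

    # Returns 0 when $1 is strictly lower than $2 (e.g. the "lt 1.15 2" case above).
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1
    }

    version_lt 1.15 2 && echo "1.15 < 2"      # matches the branch taken by the lcov check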
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.488 --rc genhtml_branch_coverage=1 00:11:57.488 --rc genhtml_function_coverage=1 00:11:57.488 --rc genhtml_legend=1 00:11:57.488 --rc geninfo_all_blocks=1 00:11:57.488 --rc geninfo_unexecuted_blocks=1 00:11:57.488 00:11:57.488 ' 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.488 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
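nvmf/common.sh, sourced above, builds the host identity that every later nvme command reuses: an NQN from nvme gen-hostnqn, a host ID taken from its UUID part, and an argument array passed as "${NVME_HOST[@]}". A sketch of that setup; the exact derivation of the host ID here is illustrative, the variable names match the trace.

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}             # trailing UUID portion
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    nvme discover -t tcp -a 10.0.0.2 -s 8009 "${NVME_HOST[@]}" -o json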
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.488 02:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.488 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.489 02:53:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.019 
02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:00.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.019 
02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.019 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:00.020 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:00.020 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
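The device scan above matches Intel E810 functions (device ID 0x159b) and then resolves each one to its net device through sysfs, finding cvl_0_0 and cvl_0_1 under 0000:0a:00.0/.1. The same discovery can be reproduced by hand, assuming lspci is available on the node:

    lspci -d 8086:159b                                    # list E810 functions
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/"               # kernel net device per function
    done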
00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:00.020 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:12:00.020 00:12:00.020 --- 10.0.0.2 ping statistics --- 00:12:00.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.020 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:12:00.020 00:12:00.020 --- 10.0.0.1 ping statistics --- 00:12:00.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.020 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=176137 00:12:00.020 02:53:10 
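The pings above verify the two-port layout nvmf_tcp_init just built: one E810 port moved into a fresh namespace as the target (10.0.0.2) and the other left in the root namespace as the initiator (10.0.0.1). Condensed, using the interface and namespace names this run detected:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator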
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 176137 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 176137 ']' 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.020 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.020 [2024-11-19 02:53:10.373424] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:12:00.020 [2024-11-19 02:53:10.373532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.020 [2024-11-19 02:53:10.449839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.020 [2024-11-19 02:53:10.500415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.020 [2024-11-19 02:53:10.500471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.020 [2024-11-19 02:53:10.500500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.021 [2024-11-19 02:53:10.500512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.021 [2024-11-19 02:53:10.500522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
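nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with shared-memory ID 0, all trace groups enabled (-e 0xFFFF) and a 4-core mask (-m 0xF), then waits for the RPC socket before continuing. The polling loop below is only a stand-in for the harness's waitforlisten helper; the rpc_get_methods call and default socket path are assumptions.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the target answers RPCs on the default socket.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done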
00:12:00.021 [2024-11-19 02:53:10.502201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.021 [2024-11-19 02:53:10.502274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.021 [2024-11-19 02:53:10.502335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.021 [2024-11-19 02:53:10.502338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.279 [2024-11-19 02:53:10.694055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.279 02:53:10 
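The RPC sequence traced here (the listener add follows in the next lines) wires up the test subsystem: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, a subsystem with the test serial, its namespace, and finally the 10.0.0.2:4420 listener. Issued by hand it would look roughly like this, assuming scripts/rpc.py and the default socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512                          # -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420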
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.279 [2024-11-19 02:53:10.771743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:00.279 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:02.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.688 
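Stripped down, the RPC sequence traced above provisions a single TCP subsystem backed by a 64 MiB malloc bdev and then runs the 100 connect/disconnect iterations whose "disconnected 1 controller(s)" notices make up the rest of this test's output. A hedged condensation, substituting the stock scripts/rpc.py client for the harness's rpc_cmd wrapper and a plain loop for its helpers:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # transport options exactly as passed in this run
    $RPC bdev_malloc_create 64 512                          # 64 MiB RAM-backed bdev, 512-byte blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 100 connect/disconnect cycles from the initiator side; each disconnect prints the
    # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" notice seen below.
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # the real test waits for the controller to appear first
    done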
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [identical "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" notices for the intervening connect/disconnect iterations, timestamps 00:13:13.584 through 00:15:00.393, omitted] 00:15:02.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1
controller(s) 00:15:05.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.600 rmmod nvme_tcp 00:15:51.600 rmmod nvme_fabrics 00:15:51.600 rmmod nvme_keyring 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 176137 ']' 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 176137 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 176137 ']' 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 176137 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:51.600 
02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176137 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176137' 00:15:51.600 killing process with pid 176137 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 176137 00:15:51.600 02:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 176137 00:15:51.600 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.600 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.600 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.600 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:51.600 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:51.600 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.600 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.601 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.601 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:51.601 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.601 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.601 02:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:54.143 00:15:54.143 real 3m56.373s 00:15:54.143 user 14m59.462s 00:15:54.143 sys 0m36.320s 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:54.143 ************************************ 00:15:54.143 END TEST nvmf_connect_disconnect 00:15:54.143 ************************************ 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.143 ************************************ 00:15:54.143 START TEST nvmf_multitarget 00:15:54.143 ************************************ 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:54.143 * Looking for test storage... 00:15:54.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.143 --rc genhtml_branch_coverage=1 00:15:54.143 --rc genhtml_function_coverage=1 00:15:54.143 --rc genhtml_legend=1 00:15:54.143 --rc geninfo_all_blocks=1 00:15:54.143 --rc geninfo_unexecuted_blocks=1 00:15:54.143 00:15:54.143 ' 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.143 --rc genhtml_branch_coverage=1 00:15:54.143 --rc genhtml_function_coverage=1 00:15:54.143 --rc genhtml_legend=1 00:15:54.143 --rc geninfo_all_blocks=1 00:15:54.143 --rc geninfo_unexecuted_blocks=1 00:15:54.143 00:15:54.143 ' 00:15:54.143 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.143 --rc genhtml_branch_coverage=1 00:15:54.143 --rc genhtml_function_coverage=1 00:15:54.143 --rc genhtml_legend=1 00:15:54.143 --rc geninfo_all_blocks=1 00:15:54.143 --rc geninfo_unexecuted_blocks=1 00:15:54.144 00:15:54.144 ' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.144 --rc genhtml_branch_coverage=1 00:15:54.144 --rc genhtml_function_coverage=1 00:15:54.144 --rc genhtml_legend=1 00:15:54.144 --rc geninfo_all_blocks=1 00:15:54.144 --rc geninfo_unexecuted_blocks=1 00:15:54.144 00:15:54.144 ' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.144 02:57:04 
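The scripts/common.sh trace above is the harness deciding which lcov flags to use: it takes the last field of `lcov --version` and compares it field by field against 2, spelling out the branch/function-coverage options only for older lcov. A stripped-down sketch of that comparison (not the harness's exact implementation):

    # Succeed (return 0) when dotted version $1 is strictly less than $2.
    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                    # versions are equal
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi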
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:54.144 02:57:04 
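The "[: : integer expression expected" message above is a benign artifact of testing an unset flag with a numeric operator: line 33 of test/nvmf/common.sh expands to '[' '' -eq 1 ']', and the test builtin refuses to compare an empty string as an integer, so that branch is simply skipped. Illustrated with a hypothetical flag name (the variable actually checked on that line is not visible in this trace):

    FLAG=""                                 # unset/empty, as in this run
    [ "$FLAG" -eq 1 ] && echo enabled       # prints "[: : integer expression expected"; branch not taken
    [ "${FLAG:-0}" -eq 1 ] && echo enabled  # defaulting to 0 keeps the comparison quiet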
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:54.144 02:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:56.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:56.048 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:56.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:56.048 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:56.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.049 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:56.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:15:56.307 00:15:56.307 --- 10.0.0.2 ping statistics --- 00:15:56.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.307 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:15:56.307 00:15:56.307 --- 10.0.0.1 ping statistics --- 00:15:56.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.307 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.307 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=207254 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 207254 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 207254 ']' 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.308 02:57:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:56.308 [2024-11-19 02:57:06.845520] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
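Condensed from the nvmf_tcp_init trace above: the two ports of the NIC are split across namespaces, with cvl_0_0 (10.0.0.2, the target side) moved into cvl_0_0_ns_spdk and cvl_0_1 (10.0.0.1, the initiator side) left in the root namespace; an iptables rule admits NVMe/TCP on port 4420 and a ping in each direction verifies the link before the target is started. The equivalent commands, as seen in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Accept NVMe/TCP traffic arriving on the initiator-side port.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                     # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace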
00:15:56.308 [2024-11-19 02:57:06.845620] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.566 [2024-11-19 02:57:06.927913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.566 [2024-11-19 02:57:06.977274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.566 [2024-11-19 02:57:06.977352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.566 [2024-11-19 02:57:06.977366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.566 [2024-11-19 02:57:06.977377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.566 [2024-11-19 02:57:06.977386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.566 [2024-11-19 02:57:06.979156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.566 [2024-11-19 02:57:06.979219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.566 [2024-11-19 02:57:06.979244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.566 [2024-11-19 02:57:06.979247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:56.566 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:56.824 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:56.824 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:56.824 "nvmf_tgt_1" 00:15:56.824 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:57.082 "nvmf_tgt_2" 00:15:57.082 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:15:57.082 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:57.082 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:57.082 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:57.340 true 00:15:57.340 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:57.340 true 00:15:57.340 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:57.340 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.597 02:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.597 rmmod nvme_tcp 00:15:57.597 rmmod nvme_fabrics 00:15:57.597 rmmod nvme_keyring 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 207254 ']' 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 207254 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 207254 ']' 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 207254 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207254 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.597 02:57:08 
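The body of the multitarget test above is compact: count the targets (only the default one exists), create two named targets, confirm the count is three, delete them, and confirm the count is back to one. A hedged condensation using the same multitarget_rpc.py helper and the flags shown in the trace (-n names the target; -s 32 is passed through as the harness does, without interpreting it here):

    MT_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($MT_RPC nvmf_get_targets | jq length)" -eq 1 ]    # only the default target

    $MT_RPC nvmf_create_target -n nvmf_tgt_1 -s 32         # prints the new target's name
    $MT_RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($MT_RPC nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets

    $MT_RPC nvmf_delete_target -n nvmf_tgt_1               # prints "true" on success
    $MT_RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($MT_RPC nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target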
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207254' 00:15:57.597 killing process with pid 207254 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 207254 00:15:57.597 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 207254 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.858 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:59.758 00:15:59.758 real 0m6.034s 00:15:59.758 user 0m6.909s 00:15:59.758 sys 0m2.104s 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:59.758 ************************************ 00:15:59.758 END TEST nvmf_multitarget 00:15:59.758 ************************************ 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.758 ************************************ 00:15:59.758 START TEST nvmf_rpc 00:15:59.758 ************************************ 00:15:59.758 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:00.018 * Looking for test storage... 
00:16:00.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:00.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.018 --rc genhtml_branch_coverage=1 00:16:00.018 --rc genhtml_function_coverage=1 00:16:00.018 --rc genhtml_legend=1 00:16:00.018 --rc geninfo_all_blocks=1 00:16:00.018 --rc geninfo_unexecuted_blocks=1 00:16:00.018 00:16:00.018 ' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:00.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.018 --rc genhtml_branch_coverage=1 00:16:00.018 --rc genhtml_function_coverage=1 00:16:00.018 --rc genhtml_legend=1 00:16:00.018 --rc geninfo_all_blocks=1 00:16:00.018 --rc geninfo_unexecuted_blocks=1 00:16:00.018 00:16:00.018 ' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:00.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.018 --rc genhtml_branch_coverage=1 00:16:00.018 --rc genhtml_function_coverage=1 00:16:00.018 --rc genhtml_legend=1 00:16:00.018 --rc geninfo_all_blocks=1 00:16:00.018 --rc geninfo_unexecuted_blocks=1 00:16:00.018 00:16:00.018 ' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:00.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.018 --rc genhtml_branch_coverage=1 00:16:00.018 --rc genhtml_function_coverage=1 00:16:00.018 --rc genhtml_legend=1 00:16:00.018 --rc geninfo_all_blocks=1 00:16:00.018 --rc geninfo_unexecuted_blocks=1 00:16:00.018 00:16:00.018 ' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
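[editor's note] The trace above is scripts/common.sh deciding whether the installed lcov is older than 2 (lt -> cmp_versions -> decimal), which selects the "--rc lcov_*" style coverage options exported next. A minimal sketch of an equivalent dotted-version "less than" check is below; it is not the helper the script actually uses (cmp_versions splits on '.', '-' and ':' and compares field by field) and it assumes GNU sort with -V is available.

    # sketch: return success if version $1 sorts strictly before version $2
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    # e.g. lcov 1.15 is older than 2, so the legacy --rc options are used
    version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 style options"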
00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.018 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:00.019 02:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:00.019 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:02.550 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:02.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:02.550 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:02.551 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:02.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:02.551 02:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:02.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:16:02.551 00:16:02.551 --- 10.0.0.2 ping statistics --- 00:16:02.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.551 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:16:02.551 00:16:02.551 --- 10.0.0.1 ping statistics --- 00:16:02.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.551 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=209968 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 209968 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 209968 ']' 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.551 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.551 [2024-11-19 02:57:12.857179] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
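[editor's note] At this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace and waitforlisten is polling its RPC socket before any rpc_cmd calls are issued. A rough, simplified equivalent of that launch-and-wait sequence is sketched below; it is not the actual common.sh helpers, paths are abbreviated relative to the SPDK checkout, and it assumes the default RPC socket /var/tmp/spdk.sock.

    # sketch: start nvmf_tgt in the target netns and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the app is listening on the UNIX socket
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done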
00:16:02.551 [2024-11-19 02:57:12.857261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.551 [2024-11-19 02:57:12.930459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.551 [2024-11-19 02:57:12.979494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.551 [2024-11-19 02:57:12.979555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.551 [2024-11-19 02:57:12.979568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.551 [2024-11-19 02:57:12.979579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.551 [2024-11-19 02:57:12.979588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.551 [2024-11-19 02:57:12.981186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.551 [2024-11-19 02:57:12.981251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.551 [2024-11-19 02:57:12.981304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.551 [2024-11-19 02:57:12.981307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.551 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:02.551 "tick_rate": 2700000000, 00:16:02.551 "poll_groups": [ 00:16:02.551 { 00:16:02.552 "name": "nvmf_tgt_poll_group_000", 00:16:02.552 "admin_qpairs": 0, 00:16:02.552 "io_qpairs": 0, 00:16:02.552 "current_admin_qpairs": 0, 00:16:02.552 "current_io_qpairs": 0, 00:16:02.552 "pending_bdev_io": 0, 00:16:02.552 "completed_nvme_io": 0, 00:16:02.552 "transports": [] 00:16:02.552 }, 00:16:02.552 { 00:16:02.552 "name": "nvmf_tgt_poll_group_001", 00:16:02.552 "admin_qpairs": 0, 00:16:02.552 "io_qpairs": 0, 00:16:02.552 "current_admin_qpairs": 0, 00:16:02.552 "current_io_qpairs": 0, 00:16:02.552 "pending_bdev_io": 0, 00:16:02.552 "completed_nvme_io": 0, 00:16:02.552 "transports": [] 00:16:02.552 }, 00:16:02.552 { 00:16:02.552 "name": "nvmf_tgt_poll_group_002", 00:16:02.552 "admin_qpairs": 0, 00:16:02.552 "io_qpairs": 0, 00:16:02.552 
"current_admin_qpairs": 0, 00:16:02.552 "current_io_qpairs": 0, 00:16:02.552 "pending_bdev_io": 0, 00:16:02.552 "completed_nvme_io": 0, 00:16:02.552 "transports": [] 00:16:02.552 }, 00:16:02.552 { 00:16:02.552 "name": "nvmf_tgt_poll_group_003", 00:16:02.552 "admin_qpairs": 0, 00:16:02.552 "io_qpairs": 0, 00:16:02.552 "current_admin_qpairs": 0, 00:16:02.552 "current_io_qpairs": 0, 00:16:02.552 "pending_bdev_io": 0, 00:16:02.552 "completed_nvme_io": 0, 00:16:02.552 "transports": [] 00:16:02.552 } 00:16:02.552 ] 00:16:02.552 }' 00:16:02.552 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:02.552 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:02.552 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:02.552 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 [2024-11-19 02:57:13.223419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:02.811 "tick_rate": 2700000000, 00:16:02.811 "poll_groups": [ 00:16:02.811 { 00:16:02.811 "name": "nvmf_tgt_poll_group_000", 00:16:02.811 "admin_qpairs": 0, 00:16:02.811 "io_qpairs": 0, 00:16:02.811 "current_admin_qpairs": 0, 00:16:02.811 "current_io_qpairs": 0, 00:16:02.811 "pending_bdev_io": 0, 00:16:02.811 "completed_nvme_io": 0, 00:16:02.811 "transports": [ 00:16:02.811 { 00:16:02.811 "trtype": "TCP" 00:16:02.811 } 00:16:02.811 ] 00:16:02.811 }, 00:16:02.811 { 00:16:02.811 "name": "nvmf_tgt_poll_group_001", 00:16:02.811 "admin_qpairs": 0, 00:16:02.811 "io_qpairs": 0, 00:16:02.811 "current_admin_qpairs": 0, 00:16:02.811 "current_io_qpairs": 0, 00:16:02.811 "pending_bdev_io": 0, 00:16:02.811 "completed_nvme_io": 0, 00:16:02.811 "transports": [ 00:16:02.811 { 00:16:02.811 "trtype": "TCP" 00:16:02.811 } 00:16:02.811 ] 00:16:02.811 }, 00:16:02.811 { 00:16:02.811 "name": "nvmf_tgt_poll_group_002", 00:16:02.811 "admin_qpairs": 0, 00:16:02.811 "io_qpairs": 0, 00:16:02.811 "current_admin_qpairs": 0, 00:16:02.811 "current_io_qpairs": 0, 00:16:02.811 "pending_bdev_io": 0, 00:16:02.811 "completed_nvme_io": 0, 00:16:02.811 "transports": [ 00:16:02.811 { 00:16:02.811 "trtype": "TCP" 
00:16:02.811 } 00:16:02.811 ] 00:16:02.811 }, 00:16:02.811 { 00:16:02.811 "name": "nvmf_tgt_poll_group_003", 00:16:02.811 "admin_qpairs": 0, 00:16:02.811 "io_qpairs": 0, 00:16:02.811 "current_admin_qpairs": 0, 00:16:02.811 "current_io_qpairs": 0, 00:16:02.811 "pending_bdev_io": 0, 00:16:02.811 "completed_nvme_io": 0, 00:16:02.811 "transports": [ 00:16:02.811 { 00:16:02.811 "trtype": "TCP" 00:16:02.811 } 00:16:02.811 ] 00:16:02.811 } 00:16:02.811 ] 00:16:02.811 }' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 Malloc1 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 [2024-11-19 02:57:13.388527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:02.811 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:02.812 [2024-11-19 02:57:13.411071] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:03.070 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:03.070 could not add new controller: failed to write to nvme-fabrics device 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:03.070 02:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.070 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:03.635 02:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:03.635 02:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:03.635 02:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.635 02:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:03.635 02:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:05.533 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.791 [2024-11-19 02:57:16.160704] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:05.791 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:05.791 could not add new controller: failed to write to nvme-fabrics device 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.791 
02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.791 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.356 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:06.356 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:06.356 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.356 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:06.356 02:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:08.883 
02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.883 [2024-11-19 02:57:18.983647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:08.883 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.884 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.884 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.884 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.141 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.141 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:09.141 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.141 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:09.141 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:11.669 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.670 [2024-11-19 02:57:21.853819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.928 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.928 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:11.928 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.928 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:11.928 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.455 [2024-11-19 02:57:24.631971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.455 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.021 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.021 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:15.021 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.021 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:15.021 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:16.919 
02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:16.919 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 [2024-11-19 02:57:27.446577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.920 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.852 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.852 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:17.852 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.852 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:17.852 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
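[Editor's note] The polling traced here (common/autotest_common.sh@1202-1212 and @1223-1235) is the waitforserial / waitforserial_disconnect pair: after nvme connect the harness sleeps, then re-runs lsblk until a block device carrying the SPDKISFASTANDAWESOME serial appears (and, after nvme disconnect, until it disappears again). A simplified reconstruction of the wait loop, assuming the helper name and defaults shown in the trace; the real helper in common/autotest_common.sh also accepts an expected device count as a second argument and may order the sleep differently:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches the subsystem serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }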
00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.749 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.750 [2024-11-19 02:57:30.265977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.750 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.315 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.315 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.315 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.315 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.315 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:22.843 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.843 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:22.843 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.843 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:22.844 
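[Editor's note] Each iteration of the target/rpc.sh@81-94 loop traced above exercises the full subsystem lifecycle over RPC: create the subsystem, expose a TCP listener, attach the Malloc1 namespace, open it to any host, connect from the kernel initiator, verify the serial shows up, then tear everything back down. Reconstructed from the trace, not copied from target/rpc.sh; rpc_cmd and the wait helpers are assumed to come from the harness, and the NQN, serial, 10.0.0.2:4420 listener and namespace id are the values the log shows:

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME            # namespace must appear on the initiator
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME # and disappear again after disconnect
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The second loop, whose trace starts at target/rpc.sh@99 below, repeats the same create / listen / add-namespace / delete cycle without connecting an initiator.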
02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 [2024-11-19 02:57:33.049642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 [2024-11-19 02:57:33.097735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 
02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 [2024-11-19 02:57:33.145906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 [2024-11-19 02:57:33.194077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.844 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 [2024-11-19 02:57:33.242244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:22.845 "tick_rate": 2700000000, 00:16:22.845 "poll_groups": [ 00:16:22.845 { 00:16:22.845 "name": "nvmf_tgt_poll_group_000", 00:16:22.845 "admin_qpairs": 2, 00:16:22.845 "io_qpairs": 84, 00:16:22.845 "current_admin_qpairs": 0, 00:16:22.845 "current_io_qpairs": 0, 00:16:22.845 "pending_bdev_io": 0, 00:16:22.845 "completed_nvme_io": 184, 00:16:22.845 "transports": [ 00:16:22.845 { 00:16:22.845 "trtype": "TCP" 00:16:22.845 } 00:16:22.845 ] 00:16:22.845 }, 00:16:22.845 { 00:16:22.845 "name": "nvmf_tgt_poll_group_001", 00:16:22.845 "admin_qpairs": 2, 00:16:22.845 "io_qpairs": 84, 00:16:22.845 "current_admin_qpairs": 0, 00:16:22.845 "current_io_qpairs": 0, 00:16:22.845 "pending_bdev_io": 0, 00:16:22.845 "completed_nvme_io": 231, 00:16:22.845 "transports": [ 00:16:22.845 { 00:16:22.845 "trtype": "TCP" 00:16:22.845 } 00:16:22.845 ] 00:16:22.845 }, 00:16:22.845 { 00:16:22.845 "name": "nvmf_tgt_poll_group_002", 00:16:22.845 "admin_qpairs": 1, 00:16:22.845 "io_qpairs": 84, 00:16:22.845 "current_admin_qpairs": 0, 00:16:22.845 "current_io_qpairs": 0, 00:16:22.845 "pending_bdev_io": 0, 00:16:22.845 "completed_nvme_io": 136, 00:16:22.845 "transports": [ 00:16:22.845 { 00:16:22.845 "trtype": "TCP" 00:16:22.845 } 00:16:22.845 ] 00:16:22.845 }, 00:16:22.845 { 00:16:22.845 "name": "nvmf_tgt_poll_group_003", 00:16:22.845 "admin_qpairs": 2, 00:16:22.845 "io_qpairs": 84, 00:16:22.845 "current_admin_qpairs": 0, 00:16:22.845 "current_io_qpairs": 0, 00:16:22.845 "pending_bdev_io": 0, 00:16:22.845 "completed_nvme_io": 135, 00:16:22.845 "transports": [ 00:16:22.845 { 00:16:22.845 "trtype": "TCP" 00:16:22.845 } 00:16:22.845 ] 00:16:22.845 } 00:16:22.845 ] 00:16:22.845 }' 00:16:22.845 02:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.845 rmmod nvme_tcp 00:16:22.845 rmmod nvme_fabrics 00:16:22.845 rmmod nvme_keyring 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 209968 ']' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 209968 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 209968 ']' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 209968 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 209968 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 209968' 
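[Editor's note] The final check above sums per-poll-group counters out of nvmf_get_stats: jsum applies a jq filter to the captured stats JSON and adds the values with awk, and the test only asserts that both totals are positive. A minimal sketch of that aggregation, assuming a $stats variable holding the JSON shown in the trace; the real jsum lives in target/rpc.sh and may differ in detail:

    jsum() {
        local filter=$1
        # Print one number per poll group, then sum them
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 84 = 336 in this run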
00:16:22.845 killing process with pid 209968 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 209968 00:16:22.845 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 209968 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.105 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.096 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:25.096 00:16:25.096 real 0m25.347s 00:16:25.096 user 1m22.029s 00:16:25.096 sys 0m4.355s 00:16:25.096 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.096 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.096 ************************************ 00:16:25.096 END TEST nvmf_rpc 00:16:25.096 ************************************ 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.383 ************************************ 00:16:25.383 START TEST nvmf_invalid 00:16:25.383 ************************************ 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:25.383 * Looking for test storage... 
00:16:25.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:25.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.383 --rc genhtml_branch_coverage=1 00:16:25.383 --rc genhtml_function_coverage=1 00:16:25.383 --rc genhtml_legend=1 00:16:25.383 --rc geninfo_all_blocks=1 00:16:25.383 --rc geninfo_unexecuted_blocks=1 00:16:25.383 00:16:25.383 ' 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:25.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.383 --rc genhtml_branch_coverage=1 00:16:25.383 --rc genhtml_function_coverage=1 00:16:25.383 --rc genhtml_legend=1 00:16:25.383 --rc geninfo_all_blocks=1 00:16:25.383 --rc geninfo_unexecuted_blocks=1 00:16:25.383 00:16:25.383 ' 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:25.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.383 --rc genhtml_branch_coverage=1 00:16:25.383 --rc genhtml_function_coverage=1 00:16:25.383 --rc genhtml_legend=1 00:16:25.383 --rc geninfo_all_blocks=1 00:16:25.383 --rc geninfo_unexecuted_blocks=1 00:16:25.383 00:16:25.383 ' 00:16:25.383 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:25.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.383 --rc genhtml_branch_coverage=1 00:16:25.384 --rc genhtml_function_coverage=1 00:16:25.384 --rc genhtml_legend=1 00:16:25.384 --rc geninfo_all_blocks=1 00:16:25.384 --rc geninfo_unexecuted_blocks=1 00:16:25.384 00:16:25.384 ' 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:25.384 02:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.384 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.073 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:28.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:16:28.074 00:16:28.074 --- 10.0.0.2 ping statistics --- 00:16:28.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.074 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:16:28.074 00:16:28.074 --- 10.0.0.1 ping statistics --- 00:16:28.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.074 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=214480 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 214480 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 214480 ']' 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:28.074 [2024-11-19 02:57:38.298472] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
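The namespace plumbing traced above reduces to roughly the commands below (a condensed sketch using the interface names and addresses from this run; the nvmf/common.sh helpers wrap each step with address flushing, cleanup traps and error handling that are omitted here):

    ip netns add cvl_0_0_ns_spdk                               # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                         # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # nvmfappstart, then wait for /var/tmp/spdk.sock

With the target answering on the RPC socket, the remainder of the test is a series of deliberately malformed nvmf_create_subsystem / nvmf_subsystem_remove_listener calls whose JSON-RPC error responses are asserted on below.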
00:16:28.074 [2024-11-19 02:57:38.298557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.074 [2024-11-19 02:57:38.371293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.074 [2024-11-19 02:57:38.419419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.074 [2024-11-19 02:57:38.419476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.074 [2024-11-19 02:57:38.419490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.074 [2024-11-19 02:57:38.419502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.074 [2024-11-19 02:57:38.419512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.074 [2024-11-19 02:57:38.420972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.074 [2024-11-19 02:57:38.421037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.074 [2024-11-19 02:57:38.421099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.074 [2024-11-19 02:57:38.421102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:28.074 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25641 00:16:28.372 [2024-11-19 02:57:38.870309] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:28.372 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:28.372 { 00:16:28.372 "nqn": "nqn.2016-06.io.spdk:cnode25641", 00:16:28.372 "tgt_name": "foobar", 00:16:28.372 "method": "nvmf_create_subsystem", 00:16:28.372 "req_id": 1 00:16:28.372 } 00:16:28.372 Got JSON-RPC error response 00:16:28.372 response: 00:16:28.372 { 00:16:28.372 "code": -32603, 00:16:28.372 "message": "Unable to find target foobar" 00:16:28.372 }' 00:16:28.372 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:28.372 { 00:16:28.372 "nqn": "nqn.2016-06.io.spdk:cnode25641", 00:16:28.372 "tgt_name": "foobar", 00:16:28.372 "method": "nvmf_create_subsystem", 00:16:28.372 "req_id": 1 00:16:28.372 } 00:16:28.372 Got JSON-RPC error response 00:16:28.372 
response: 00:16:28.372 { 00:16:28.372 "code": -32603, 00:16:28.372 "message": "Unable to find target foobar" 00:16:28.372 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:28.372 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:28.372 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode938 00:16:28.656 [2024-11-19 02:57:39.195397] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode938: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:28.656 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:28.656 { 00:16:28.656 "nqn": "nqn.2016-06.io.spdk:cnode938", 00:16:28.656 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:28.656 "method": "nvmf_create_subsystem", 00:16:28.656 "req_id": 1 00:16:28.656 } 00:16:28.656 Got JSON-RPC error response 00:16:28.656 response: 00:16:28.656 { 00:16:28.656 "code": -32602, 00:16:28.656 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:28.656 }' 00:16:28.656 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:28.656 { 00:16:28.656 "nqn": "nqn.2016-06.io.spdk:cnode938", 00:16:28.656 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:28.656 "method": "nvmf_create_subsystem", 00:16:28.656 "req_id": 1 00:16:28.656 } 00:16:28.656 Got JSON-RPC error response 00:16:28.656 response: 00:16:28.656 { 00:16:28.656 "code": -32602, 00:16:28.656 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:28.656 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:28.656 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:28.656 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6374 00:16:28.940 [2024-11-19 02:57:39.464344] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6374: invalid model number 'SPDK_Controller' 00:16:28.940 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:28.940 { 00:16:28.940 "nqn": "nqn.2016-06.io.spdk:cnode6374", 00:16:28.940 "model_number": "SPDK_Controller\u001f", 00:16:28.940 "method": "nvmf_create_subsystem", 00:16:28.940 "req_id": 1 00:16:28.940 } 00:16:28.940 Got JSON-RPC error response 00:16:28.940 response: 00:16:28.940 { 00:16:28.940 "code": -32602, 00:16:28.940 "message": "Invalid MN SPDK_Controller\u001f" 00:16:28.940 }' 00:16:28.940 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:28.940 { 00:16:28.940 "nqn": "nqn.2016-06.io.spdk:cnode6374", 00:16:28.940 "model_number": "SPDK_Controller\u001f", 00:16:28.940 "method": "nvmf_create_subsystem", 00:16:28.940 "req_id": 1 00:16:28.940 } 00:16:28.940 Got JSON-RPC error response 00:16:28.940 response: 00:16:28.940 { 00:16:28.940 "code": -32602, 00:16:28.940 "message": "Invalid MN SPDK_Controller\u001f" 00:16:28.940 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:28.940 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:28.940 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:28.941 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:29.225 02:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'OnK_ss8$|]l^qG'\''`7[[C`' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OnK_ss8$|]l^qG'\''`7[[C`' nqn.2016-06.io.spdk:cnode31997 00:16:29.225 [2024-11-19 02:57:39.793400] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31997: invalid serial number 'OnK_ss8$|]l^qG'`7[[C`' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:29.225 { 00:16:29.225 "nqn": "nqn.2016-06.io.spdk:cnode31997", 00:16:29.225 "serial_number": "OnK_ss8$|]l^qG'\''`7[[C`", 00:16:29.225 "method": "nvmf_create_subsystem", 00:16:29.225 "req_id": 1 00:16:29.225 } 00:16:29.225 Got JSON-RPC error response 00:16:29.225 response: 00:16:29.225 { 00:16:29.225 "code": -32602, 00:16:29.225 "message": "Invalid SN OnK_ss8$|]l^qG'\''`7[[C`" 00:16:29.225 }' 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:29.225 { 00:16:29.225 "nqn": "nqn.2016-06.io.spdk:cnode31997", 00:16:29.225 "serial_number": "OnK_ss8$|]l^qG'`7[[C`", 00:16:29.225 "method": "nvmf_create_subsystem", 00:16:29.225 "req_id": 1 00:16:29.225 } 00:16:29.225 Got JSON-RPC error response 00:16:29.225 response: 00:16:29.225 { 00:16:29.225 "code": -32602, 00:16:29.225 "message": "Invalid SN OnK_ss8$|]l^qG'`7[[C`" 00:16:29.225 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' 
'76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:29.225 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:29.503 02:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:29.503 
02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.503 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
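The long run of printf/echo/string+= steps above (and the similar run that produced the 21-character serial earlier) is the gen_random_s helper from target/invalid.sh building a string one character at a time from the decimal code points 32-127 listed in the chars array. An equivalent sketch of that helper follows; the index expression is an assumption, and note that invalid.sh pins RANDOM=0 near the top, so the "random" strings are reproducible across runs:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})                 # same printable code points as the chars array above
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point, convert it to hex, and materialise the character
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

gen_random_s 21 produced the over-long serial number used earlier, and the gen_random_s 41 call in progress here produces the over-long model number that the next nvmf_create_subsystem call is expected to reject.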
00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 
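Each negative case in this test follows the same capture-and-match pattern: call rpc.py with one deliberately invalid field, capture the JSON-RPC error text, and check it for the expected message. A stand-alone sketch of that pattern, shown here for the invalid-serial-number case from target/invalid.sh@54-55 (the rpc.py path is shortened, and the exact redirection and exit-status handling used by invalid.sh is an assumption):

    rpc=/path/to/spdk/scripts/rpc.py
    serial=$(gen_random_s 21)        # one character longer than the 20-byte NVMe SN field
    out=$("$rpc" nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode31997 2>&1) || true
    [[ $out == *"Invalid SN"* ]]     # the case passes only if the expected error came back

The same skeleton is reused for the invalid target name ("Unable to find target"), the invalid model number ("Invalid MN"), the bad listener removal ("Invalid parameters"), and the out-of-range cntlid checks that follow.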
00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x75' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$a1twI; _),<#UfODo0 \6/1ejj1_B@V~n1`Su%' 00:16:29.504 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$a1twI; _),<#UfODo0 \6/1ejj1_B@V~n1`Su%' nqn.2016-06.io.spdk:cnode23753 00:16:29.796 [2024-11-19 02:57:40.226835] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23753: invalid model number '$a1twI; _),<#UfODo0 \6/1ejj1_B@V~n1`Su%' 00:16:29.796 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:29.796 { 00:16:29.796 "nqn": "nqn.2016-06.io.spdk:cnode23753", 00:16:29.796 "model_number": "$a1twI; _),<#UfO\u007fDo0 \\6/1ejj1_B@V~n1\u007f`Su%", 00:16:29.796 "method": "nvmf_create_subsystem", 00:16:29.796 "req_id": 1 00:16:29.796 } 00:16:29.796 Got JSON-RPC error response 00:16:29.796 response: 00:16:29.796 { 00:16:29.796 "code": -32602, 00:16:29.796 "message": "Invalid MN $a1twI; _),<#UfO\u007fDo0 \\6/1ejj1_B@V~n1\u007f`Su%" 00:16:29.796 }' 00:16:29.796 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:29.796 { 00:16:29.796 "nqn": "nqn.2016-06.io.spdk:cnode23753", 00:16:29.796 "model_number": "$a1twI; _),<#UfO\u007fDo0 \\6/1ejj1_B@V~n1\u007f`Su%", 00:16:29.796 "method": "nvmf_create_subsystem", 00:16:29.796 "req_id": 1 00:16:29.796 } 00:16:29.796 Got JSON-RPC error response 00:16:29.796 response: 00:16:29.796 { 00:16:29.796 "code": -32602, 00:16:29.796 "message": "Invalid MN $a1twI; _),<#UfO\u007fDo0 \\6/1ejj1_B@V~n1\u007f`Su%" 00:16:29.796 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:29.796 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:30.103 [2024-11-19 02:57:40.491795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.103 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:30.390 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:30.390 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:30.390 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:30.390 02:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:30.390 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:30.697 [2024-11-19 02:57:41.041532] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:30.698 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:30.698 { 00:16:30.698 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:30.698 "listen_address": { 00:16:30.698 "trtype": "tcp", 00:16:30.698 "traddr": "", 00:16:30.698 "trsvcid": "4421" 00:16:30.698 }, 00:16:30.698 "method": "nvmf_subsystem_remove_listener", 00:16:30.698 "req_id": 1 00:16:30.698 } 00:16:30.698 Got JSON-RPC error response 00:16:30.698 response: 00:16:30.698 { 00:16:30.698 "code": -32602, 00:16:30.698 "message": "Invalid parameters" 00:16:30.698 }' 00:16:30.698 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:30.698 { 00:16:30.698 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:30.698 "listen_address": { 00:16:30.698 "trtype": "tcp", 00:16:30.698 "traddr": "", 00:16:30.698 "trsvcid": "4421" 00:16:30.698 }, 00:16:30.698 "method": "nvmf_subsystem_remove_listener", 00:16:30.698 "req_id": 1 00:16:30.698 } 00:16:30.698 Got JSON-RPC error response 00:16:30.698 response: 00:16:30.698 { 00:16:30.698 "code": -32602, 00:16:30.698 "message": "Invalid parameters" 00:16:30.698 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:30.698 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6828 -i 0 00:16:30.990 [2024-11-19 02:57:41.322429] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6828: invalid cntlid range [0-65519] 00:16:30.990 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:30.990 { 00:16:30.990 "nqn": "nqn.2016-06.io.spdk:cnode6828", 00:16:30.990 "min_cntlid": 0, 00:16:30.990 "method": "nvmf_create_subsystem", 00:16:30.990 "req_id": 1 00:16:30.990 } 00:16:30.990 Got JSON-RPC error response 00:16:30.990 response: 00:16:30.990 { 00:16:30.990 "code": -32602, 00:16:30.990 "message": "Invalid cntlid range [0-65519]" 00:16:30.990 }' 00:16:30.990 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:30.990 { 00:16:30.990 "nqn": "nqn.2016-06.io.spdk:cnode6828", 00:16:30.990 "min_cntlid": 0, 00:16:30.990 "method": "nvmf_create_subsystem", 00:16:30.990 "req_id": 1 00:16:30.990 } 00:16:30.990 Got JSON-RPC error response 00:16:30.990 response: 00:16:30.990 { 00:16:30.990 "code": -32602, 00:16:30.990 "message": "Invalid cntlid range [0-65519]" 00:16:30.990 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:30.990 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2046 -i 65520 00:16:30.991 [2024-11-19 02:57:41.587272] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2046: invalid cntlid range [65520-65519] 00:16:31.252 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:31.252 { 00:16:31.252 "nqn": 
"nqn.2016-06.io.spdk:cnode2046", 00:16:31.252 "min_cntlid": 65520, 00:16:31.252 "method": "nvmf_create_subsystem", 00:16:31.252 "req_id": 1 00:16:31.252 } 00:16:31.252 Got JSON-RPC error response 00:16:31.252 response: 00:16:31.252 { 00:16:31.252 "code": -32602, 00:16:31.252 "message": "Invalid cntlid range [65520-65519]" 00:16:31.252 }' 00:16:31.252 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:31.252 { 00:16:31.252 "nqn": "nqn.2016-06.io.spdk:cnode2046", 00:16:31.252 "min_cntlid": 65520, 00:16:31.252 "method": "nvmf_create_subsystem", 00:16:31.252 "req_id": 1 00:16:31.252 } 00:16:31.252 Got JSON-RPC error response 00:16:31.252 response: 00:16:31.252 { 00:16:31.252 "code": -32602, 00:16:31.252 "message": "Invalid cntlid range [65520-65519]" 00:16:31.252 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.252 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11562 -I 0 00:16:31.252 [2024-11-19 02:57:41.856138] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11562: invalid cntlid range [1-0] 00:16:31.510 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:31.510 { 00:16:31.510 "nqn": "nqn.2016-06.io.spdk:cnode11562", 00:16:31.510 "max_cntlid": 0, 00:16:31.510 "method": "nvmf_create_subsystem", 00:16:31.510 "req_id": 1 00:16:31.510 } 00:16:31.510 Got JSON-RPC error response 00:16:31.510 response: 00:16:31.510 { 00:16:31.510 "code": -32602, 00:16:31.510 "message": "Invalid cntlid range [1-0]" 00:16:31.510 }' 00:16:31.510 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:31.510 { 00:16:31.510 "nqn": "nqn.2016-06.io.spdk:cnode11562", 00:16:31.510 "max_cntlid": 0, 00:16:31.510 "method": "nvmf_create_subsystem", 00:16:31.510 "req_id": 1 00:16:31.510 } 00:16:31.510 Got JSON-RPC error response 00:16:31.510 response: 00:16:31.510 { 00:16:31.510 "code": -32602, 00:16:31.510 "message": "Invalid cntlid range [1-0]" 00:16:31.510 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.510 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2327 -I 65520 00:16:31.510 [2024-11-19 02:57:42.121034] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2327: invalid cntlid range [1-65520] 00:16:31.767 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:31.767 { 00:16:31.767 "nqn": "nqn.2016-06.io.spdk:cnode2327", 00:16:31.767 "max_cntlid": 65520, 00:16:31.767 "method": "nvmf_create_subsystem", 00:16:31.767 "req_id": 1 00:16:31.767 } 00:16:31.767 Got JSON-RPC error response 00:16:31.767 response: 00:16:31.767 { 00:16:31.767 "code": -32602, 00:16:31.767 "message": "Invalid cntlid range [1-65520]" 00:16:31.767 }' 00:16:31.767 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:31.767 { 00:16:31.767 "nqn": "nqn.2016-06.io.spdk:cnode2327", 00:16:31.767 "max_cntlid": 65520, 00:16:31.767 "method": "nvmf_create_subsystem", 00:16:31.767 "req_id": 1 00:16:31.767 } 00:16:31.767 Got JSON-RPC error response 00:16:31.767 response: 00:16:31.767 { 00:16:31.767 "code": -32602, 00:16:31.767 "message": "Invalid cntlid range [1-65520]" 
00:16:31.767 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.767 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29006 -i 6 -I 5 00:16:32.026 [2024-11-19 02:57:42.409998] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29006: invalid cntlid range [6-5] 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:32.026 { 00:16:32.026 "nqn": "nqn.2016-06.io.spdk:cnode29006", 00:16:32.026 "min_cntlid": 6, 00:16:32.026 "max_cntlid": 5, 00:16:32.026 "method": "nvmf_create_subsystem", 00:16:32.026 "req_id": 1 00:16:32.026 } 00:16:32.026 Got JSON-RPC error response 00:16:32.026 response: 00:16:32.026 { 00:16:32.026 "code": -32602, 00:16:32.026 "message": "Invalid cntlid range [6-5]" 00:16:32.026 }' 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:32.026 { 00:16:32.026 "nqn": "nqn.2016-06.io.spdk:cnode29006", 00:16:32.026 "min_cntlid": 6, 00:16:32.026 "max_cntlid": 5, 00:16:32.026 "method": "nvmf_create_subsystem", 00:16:32.026 "req_id": 1 00:16:32.026 } 00:16:32.026 Got JSON-RPC error response 00:16:32.026 response: 00:16:32.026 { 00:16:32.026 "code": -32602, 00:16:32.026 "message": "Invalid cntlid range [6-5]" 00:16:32.026 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:32.026 { 00:16:32.026 "name": "foobar", 00:16:32.026 "method": "nvmf_delete_target", 00:16:32.026 "req_id": 1 00:16:32.026 } 00:16:32.026 Got JSON-RPC error response 00:16:32.026 response: 00:16:32.026 { 00:16:32.026 "code": -32602, 00:16:32.026 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:32.026 }' 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:32.026 { 00:16:32.026 "name": "foobar", 00:16:32.026 "method": "nvmf_delete_target", 00:16:32.026 "req_id": 1 00:16:32.026 } 00:16:32.026 Got JSON-RPC error response 00:16:32.026 response: 00:16:32.026 { 00:16:32.026 "code": -32602, 00:16:32.026 "message": "The specified target doesn't exist, cannot delete it." 
00:16:32.026 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.026 rmmod nvme_tcp 00:16:32.026 rmmod nvme_fabrics 00:16:32.026 rmmod nvme_keyring 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 214480 ']' 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 214480 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 214480 ']' 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 214480 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214480 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214480' 00:16:32.026 killing process with pid 214480 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 214480 00:16:32.026 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 214480 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.286 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:34.827 00:16:34.827 real 0m9.114s 00:16:34.827 user 0m21.783s 00:16:34.827 sys 0m2.568s 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:34.827 ************************************ 00:16:34.827 END TEST nvmf_invalid 00:16:34.827 ************************************ 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.827 ************************************ 00:16:34.827 START TEST nvmf_connect_stress 00:16:34.827 ************************************ 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:34.827 * Looking for test storage... 
00:16:34.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.827 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.827 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.828 --rc genhtml_branch_coverage=1 00:16:34.828 --rc genhtml_function_coverage=1 00:16:34.828 --rc genhtml_legend=1 00:16:34.828 --rc geninfo_all_blocks=1 00:16:34.828 --rc geninfo_unexecuted_blocks=1 00:16:34.828 00:16:34.828 ' 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.828 --rc genhtml_branch_coverage=1 00:16:34.828 --rc genhtml_function_coverage=1 00:16:34.828 --rc genhtml_legend=1 00:16:34.828 --rc geninfo_all_blocks=1 00:16:34.828 --rc geninfo_unexecuted_blocks=1 00:16:34.828 00:16:34.828 ' 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.828 --rc genhtml_branch_coverage=1 00:16:34.828 --rc genhtml_function_coverage=1 00:16:34.828 --rc genhtml_legend=1 00:16:34.828 --rc geninfo_all_blocks=1 00:16:34.828 --rc geninfo_unexecuted_blocks=1 00:16:34.828 00:16:34.828 ' 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.828 --rc genhtml_branch_coverage=1 00:16:34.828 --rc genhtml_function_coverage=1 00:16:34.828 --rc genhtml_legend=1 00:16:34.828 --rc geninfo_all_blocks=1 00:16:34.828 --rc geninfo_unexecuted_blocks=1 00:16:34.828 00:16:34.828 ' 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.828 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:34.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:34.829 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.733 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.733 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.733 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.733 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.733 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.733 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.734 02:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.734 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:16:36.993 00:16:36.993 --- 10.0.0.2 ping statistics --- 00:16:36.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.993 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:16:36.993 00:16:36.993 --- 10.0.0.1 ping statistics --- 00:16:36.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.993 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.993 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=217147 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 217147 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 217147 ']' 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:36.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.994 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.994 [2024-11-19 02:57:47.441832] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:36.994 [2024-11-19 02:57:47.441915] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.994 [2024-11-19 02:57:47.517060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:36.994 [2024-11-19 02:57:47.563320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.994 [2024-11-19 02:57:47.563384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.994 [2024-11-19 02:57:47.563397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.994 [2024-11-19 02:57:47.563408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.994 [2024-11-19 02:57:47.563417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.994 [2024-11-19 02:57:47.564895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.994 [2024-11-19 02:57:47.564962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.994 [2024-11-19 02:57:47.564967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.252 [2024-11-19 02:57:47.717312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.252 [2024-11-19 02:57:47.734469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.252 NULL1 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=217177 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.252 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:37.253 02:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.253 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.511 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.511 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:37.511 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.511 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.511 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.076 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.076 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:38.076 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.076 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.076 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.334 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.335 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:38.335 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.335 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.335 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.592 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.592 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:38.592 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.592 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.592 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.851 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.851 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:38.851 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.851 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.851 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.109 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.109 02:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:39.109 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.109 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.109 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.674 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.674 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:39.674 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.674 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.674 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.931 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.931 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:39.932 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.932 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.932 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.189 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.189 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:40.189 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.189 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.189 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.447 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.447 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:40.447 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.447 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.447 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.013 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.013 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:41.013 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.013 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.013 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.271 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.271 02:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:41.271 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.271 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.271 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.529 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.529 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:41.529 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.529 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.529 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.787 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.787 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:41.787 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.787 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.787 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.045 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:42.045 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.045 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.045 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.611 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.611 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:42.611 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.611 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.611 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.869 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.869 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:42.869 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.869 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.869 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 02:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:43.127 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.127 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.385 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.385 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:43.385 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.385 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.385 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.643 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.643 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:43.643 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.643 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.643 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.210 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.210 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:44.210 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.210 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.210 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.468 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.468 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:44.468 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.468 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.468 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.727 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.727 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:44.727 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.727 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.727 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.985 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.985 02:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:44.985 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.985 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.985 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.243 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.243 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:45.243 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.243 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.243 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.809 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.809 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:45.809 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.809 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:46.066 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.066 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.067 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.324 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.324 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:46.324 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.324 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.324 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.582 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.583 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:46.583 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.583 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.583 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.841 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.841 02:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:46.841 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.841 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.841 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.407 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.407 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:47.407 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.407 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.407 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.407 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217177 00:16:47.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (217177) - No such process 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 217177 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.666 rmmod nvme_tcp 00:16:47.666 rmmod nvme_fabrics 00:16:47.666 rmmod nvme_keyring 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 217147 ']' 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 217147 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 217147 ']' 00:16:47.666 02:57:58 
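
Lines 34-35 of connect_stress.sh repeat above roughly every half second: kill -0 217177 checks that the stress client is still alive, and rpc_cmd drives another batch of RPCs at the target while it runs. When the client's run ends (about ten seconds later, matching its -t 10 option), kill -0 fails with "No such process", and the script waits for the pid, removes rpc.txt, clears its trap, and calls nvmftestfini. A minimal sketch of that pattern; the while-loop shape and the redirection into rpc_cmd are assumptions, since the trace only shows the individual iterations.

    # Assumed shape of the monitor loop behind the repeated @34/@35 trace lines.
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"        # assumed: each pass feeds the prepared rpc.txt to the target
    done

    wait "$PERF_PID" || true     # @38: reap the finished connect_stress client
    rm -f "$rpcs"                # @39: drop the scratch RPC file
    trap - SIGINT SIGTERM EXIT   # @41: normal exit path, cancel the error trap
    nvmftestfini                 # @43: unload nvme-tcp/fabrics modules, stop the target
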
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 217147 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217147 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217147' 00:16:47.666 killing process with pid 217147 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 217147 00:16:47.666 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 217147 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.927 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.832 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:49.832 00:16:49.832 real 0m15.492s 00:16:49.832 user 0m40.104s 00:16:49.832 sys 0m4.622s 00:16:49.832 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.832 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 ************************************ 00:16:49.832 END TEST nvmf_connect_stress 00:16:49.832 ************************************ 00:16:49.832 02:58:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:49.832 02:58:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:49.832 02:58:00 
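
The teardown above ends with killprocess 217147 from autotest_common.sh: the helper refuses an empty pid, confirms the process still exists and (on Linux) that its command name is not sudo before sending the kill, then waits for it. nvmf_tcp_fini afterwards restores iptables minus the SPDK_NVMF rules, removes the cvl_0_0_ns_spdk namespace, and flushes cvl_0_1, closing out nvmf_connect_stress after about 15.5 s and handing control to the nvmf_fused_ordering run_test. Below is a rough reconstruction of the killprocess steps as they appear in the @954-@978 trace lines, not the verbatim SPDK source.

    # Rough reconstruction of killprocess() from the trace above.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                              # @954: no pid, nothing to do
        kill -0 "$pid"                                         # @958: fail fast if it already exited
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")    # reactor_1 in this run
        fi
        if [[ $process_name == sudo ]]; then
            :   # @964: the sudo-wrapped case takes a different path, not exercised here
        fi
        echo "killing process with pid $pid"                   # @972
        kill "$pid"                                            # @973
        wait "$pid" || true                                    # @978
    }
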
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.832 02:58:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.090 ************************************ 00:16:50.090 START TEST nvmf_fused_ordering 00:16:50.090 ************************************ 00:16:50.090 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:50.090 * Looking for test storage... 00:16:50.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.090 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.090 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.090 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.091 --rc genhtml_branch_coverage=1 00:16:50.091 --rc genhtml_function_coverage=1 00:16:50.091 --rc genhtml_legend=1 00:16:50.091 --rc geninfo_all_blocks=1 00:16:50.091 --rc geninfo_unexecuted_blocks=1 00:16:50.091 00:16:50.091 ' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.091 --rc genhtml_branch_coverage=1 00:16:50.091 --rc genhtml_function_coverage=1 00:16:50.091 --rc genhtml_legend=1 00:16:50.091 --rc geninfo_all_blocks=1 00:16:50.091 --rc geninfo_unexecuted_blocks=1 00:16:50.091 00:16:50.091 ' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.091 --rc genhtml_branch_coverage=1 00:16:50.091 --rc genhtml_function_coverage=1 00:16:50.091 --rc genhtml_legend=1 00:16:50.091 --rc geninfo_all_blocks=1 00:16:50.091 --rc geninfo_unexecuted_blocks=1 00:16:50.091 00:16:50.091 ' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.091 --rc genhtml_branch_coverage=1 00:16:50.091 --rc genhtml_function_coverage=1 00:16:50.091 --rc genhtml_legend=1 00:16:50.091 --rc geninfo_all_blocks=1 00:16:50.091 --rc geninfo_unexecuted_blocks=1 00:16:50.091 00:16:50.091 ' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
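
The block above is fused_ordering.sh probing the installed lcov: `lt 1.15 2` from scripts/common.sh splits both version strings on '.', '-' and ':' into the ver1/ver2 arrays, walks the fields, and returns success because 1 < 2, which selects the branch-coverage LCOV_OPTS that follow. A condensed sketch of that comparison, assuming the helpers behave the way the trace suggests (the real cmp_versions also routes through a decimal() sanitizer that is omitted here).

    # Condensed sketch of lt()/cmp_versions() as traced from scripts/common.sh.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                       # split on '.', '-' and ':' as the trace shows
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]         # all fields equal
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"    # the branch taken in this run
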
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:50.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.091 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.092 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:52.625 02:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:52.625 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:52.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:52.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:52.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:52.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
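
nvmftestinit has now reached the hardware-discovery half of nvmf/common.sh: the pci_devs list is narrowed to the Intel E810 functions (device ID 0x159b at 0000:0a:00.0 and 0000:0a:00.1), and for each one the bound kernel interface is read out of sysfs, yielding cvl_0_0 and cvl_0_1. A trimmed sketch of that loop, assuming pci_devs is already populated as the trace shows.

    # Trimmed sketch of the net-device discovery traced above (nvmf/common.sh).
    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # 0000:0a:00.0, 0000:0a:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this function
        [[ -e ${pci_net_devs[0]} ]] || continue            # skip functions with no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
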
-- # net_devs+=("${pci_net_devs[@]}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:52.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:16:52.626 00:16:52.626 --- 10.0.0.2 ping statistics --- 00:16:52.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.626 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:16:52.626 00:16:52.626 --- 10.0.0.1 ping statistics --- 00:16:52.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.626 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:52.626 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=220439 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 220439 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 220439 ']' 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
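
With both interfaces identified, nvmf_tcp_init splits them across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace with 10.0.0.1/24 (the initiator side), an iptables rule admits TCP port 4420, and one ping in each direction (0.359 ms and 0.156 ms above) confirms the link before the target is started. The equivalent commands, lifted from the trace:

    # Namespace plumbing as traced above (nvmf/common.sh, nvmf_tcp_init).
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

    ip addr add 10.0.0.1/24 dev cvl_0_1                                          # initiator side
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

    # the real rule also tags itself with an SPDK_NVMF comment so cleanup can find it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # root ns -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # target ns -> initiator
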
/var/tmp/spdk.sock...' 00:16:52.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.627 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.627 [2024-11-19 02:58:02.983686] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:52.627 [2024-11-19 02:58:02.983784] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.627 [2024-11-19 02:58:03.055555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.627 [2024-11-19 02:58:03.101967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.627 [2024-11-19 02:58:03.102042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.627 [2024-11-19 02:58:03.102054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.627 [2024-11-19 02:58:03.102065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.627 [2024-11-19 02:58:03.102074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.627 [2024-11-19 02:58:03.102644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.627 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.886 [2024-11-19 02:58:03.246557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
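
nvmfappstart then launches the target itself: nvmf_tgt runs inside the new namespace with core mask 0x2 and all tracepoint groups enabled (-e 0xFFFF), its pid is recorded as nvmfpid=220439, and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers; the "Reactor started on core 1" notice above marks that point. In shell terms, with the workspace path abbreviated to $rootdir:

    # The nvmfappstart sequence traced above.
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                  # 220439 in this run
    waitforlisten "$nvmfpid"    # SPDK helper: polls /var/tmp/spdk.sock (max_retries=100) before rpc_cmd is used
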
-- # [[ 0 == 0 ]] 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.886 [2024-11-19 02:58:03.262802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.886 NULL1 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.886 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:52.886 [2024-11-19 02:58:03.306480] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
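The trace above brings up the target side for the fused-ordering run: fused_ordering.sh starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace, creates a TCP transport, a subsystem with a null bdev as namespace 1, and a listener on 10.0.0.2:4420, then launches the fused_ordering tool against that listener. A minimal shell sketch of the same RPC sequence, reconstructed from the commands visible in the trace (it assumes a plain rpc.py client in place of the rpc_cmd helper, and that nvmf_tgt is already up on /var/tmp/spdk.sock):

  # Reconstructed from the fused_ordering.sh steps traced above; flags copied verbatim from the log.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512        # 1000 MB x 512 B blocks; reported as "size: 1GB" on attach
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

With the namespace exported, fused_ordering connects using '-r trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and issues its numbered fused submissions, which is what produces the fused_ordering(0) through fused_ordering(1023) run that follows.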
00:16:52.886 [2024-11-19 02:58:03.306515] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220465 ] 00:16:53.453 Attached to nqn.2016-06.io.spdk:cnode1 00:16:53.453 Namespace ID: 1 size: 1GB 00:16:53.453 fused_ordering(0) 00:16:53.453 fused_ordering(1) 00:16:53.453 fused_ordering(2) 00:16:53.453 fused_ordering(3) 00:16:53.453 fused_ordering(4) 00:16:53.453 fused_ordering(5) 00:16:53.453 fused_ordering(6) 00:16:53.453 fused_ordering(7) 00:16:53.453 fused_ordering(8) 00:16:53.453 fused_ordering(9) 00:16:53.453 fused_ordering(10) 00:16:53.453 fused_ordering(11) 00:16:53.453 fused_ordering(12) 00:16:53.453 fused_ordering(13) 00:16:53.453 fused_ordering(14) 00:16:53.453 fused_ordering(15) 00:16:53.453 fused_ordering(16) 00:16:53.453 fused_ordering(17) 00:16:53.453 fused_ordering(18) 00:16:53.453 fused_ordering(19) 00:16:53.453 fused_ordering(20) 00:16:53.453 fused_ordering(21) 00:16:53.453 fused_ordering(22) 00:16:53.453 fused_ordering(23) 00:16:53.453 fused_ordering(24) 00:16:53.453 fused_ordering(25) 00:16:53.453 fused_ordering(26) 00:16:53.453 fused_ordering(27) 00:16:53.453 fused_ordering(28) 00:16:53.453 fused_ordering(29) 00:16:53.453 fused_ordering(30) 00:16:53.453 fused_ordering(31) 00:16:53.453 fused_ordering(32) 00:16:53.453 fused_ordering(33) 00:16:53.453 fused_ordering(34) 00:16:53.453 fused_ordering(35) 00:16:53.453 fused_ordering(36) 00:16:53.453 fused_ordering(37) 00:16:53.453 fused_ordering(38) 00:16:53.453 fused_ordering(39) 00:16:53.453 fused_ordering(40) 00:16:53.453 fused_ordering(41) 00:16:53.453 fused_ordering(42) 00:16:53.453 fused_ordering(43) 00:16:53.453 fused_ordering(44) 00:16:53.453 fused_ordering(45) 00:16:53.453 fused_ordering(46) 00:16:53.453 fused_ordering(47) 00:16:53.453 fused_ordering(48) 00:16:53.453 fused_ordering(49) 00:16:53.453 fused_ordering(50) 00:16:53.453 fused_ordering(51) 00:16:53.453 fused_ordering(52) 00:16:53.453 fused_ordering(53) 00:16:53.453 fused_ordering(54) 00:16:53.453 fused_ordering(55) 00:16:53.453 fused_ordering(56) 00:16:53.453 fused_ordering(57) 00:16:53.453 fused_ordering(58) 00:16:53.453 fused_ordering(59) 00:16:53.453 fused_ordering(60) 00:16:53.453 fused_ordering(61) 00:16:53.453 fused_ordering(62) 00:16:53.453 fused_ordering(63) 00:16:53.453 fused_ordering(64) 00:16:53.453 fused_ordering(65) 00:16:53.453 fused_ordering(66) 00:16:53.453 fused_ordering(67) 00:16:53.453 fused_ordering(68) 00:16:53.453 fused_ordering(69) 00:16:53.453 fused_ordering(70) 00:16:53.453 fused_ordering(71) 00:16:53.453 fused_ordering(72) 00:16:53.453 fused_ordering(73) 00:16:53.453 fused_ordering(74) 00:16:53.453 fused_ordering(75) 00:16:53.453 fused_ordering(76) 00:16:53.453 fused_ordering(77) 00:16:53.453 fused_ordering(78) 00:16:53.453 fused_ordering(79) 00:16:53.453 fused_ordering(80) 00:16:53.453 fused_ordering(81) 00:16:53.453 fused_ordering(82) 00:16:53.453 fused_ordering(83) 00:16:53.453 fused_ordering(84) 00:16:53.453 fused_ordering(85) 00:16:53.453 fused_ordering(86) 00:16:53.453 fused_ordering(87) 00:16:53.453 fused_ordering(88) 00:16:53.453 fused_ordering(89) 00:16:53.453 fused_ordering(90) 00:16:53.453 fused_ordering(91) 00:16:53.453 fused_ordering(92) 00:16:53.453 fused_ordering(93) 00:16:53.453 fused_ordering(94) 00:16:53.453 fused_ordering(95) 00:16:53.453 fused_ordering(96) 00:16:53.453 fused_ordering(97) 00:16:53.453 fused_ordering(98) 
00:16:53.453 fused_ordering(99) 00:16:53.453 fused_ordering(100) 00:16:53.453 fused_ordering(101) 00:16:53.453 fused_ordering(102) 00:16:53.453 fused_ordering(103) 00:16:53.453 fused_ordering(104) 00:16:53.453 fused_ordering(105) 00:16:53.453 fused_ordering(106) 00:16:53.453 fused_ordering(107) 00:16:53.453 fused_ordering(108) 00:16:53.453 fused_ordering(109) 00:16:53.453 fused_ordering(110) 00:16:53.453 fused_ordering(111) 00:16:53.453 fused_ordering(112) 00:16:53.453 fused_ordering(113) 00:16:53.453 fused_ordering(114) 00:16:53.453 fused_ordering(115) 00:16:53.453 fused_ordering(116) 00:16:53.453 fused_ordering(117) 00:16:53.453 fused_ordering(118) 00:16:53.453 fused_ordering(119) 00:16:53.453 fused_ordering(120) 00:16:53.453 fused_ordering(121) 00:16:53.453 fused_ordering(122) 00:16:53.453 fused_ordering(123) 00:16:53.453 fused_ordering(124) 00:16:53.453 fused_ordering(125) 00:16:53.453 fused_ordering(126) 00:16:53.453 fused_ordering(127) 00:16:53.453 fused_ordering(128) 00:16:53.453 fused_ordering(129) 00:16:53.453 fused_ordering(130) 00:16:53.453 fused_ordering(131) 00:16:53.453 fused_ordering(132) 00:16:53.453 fused_ordering(133) 00:16:53.453 fused_ordering(134) 00:16:53.453 fused_ordering(135) 00:16:53.453 fused_ordering(136) 00:16:53.453 fused_ordering(137) 00:16:53.453 fused_ordering(138) 00:16:53.453 fused_ordering(139) 00:16:53.453 fused_ordering(140) 00:16:53.453 fused_ordering(141) 00:16:53.453 fused_ordering(142) 00:16:53.453 fused_ordering(143) 00:16:53.453 fused_ordering(144) 00:16:53.453 fused_ordering(145) 00:16:53.453 fused_ordering(146) 00:16:53.453 fused_ordering(147) 00:16:53.453 fused_ordering(148) 00:16:53.453 fused_ordering(149) 00:16:53.453 fused_ordering(150) 00:16:53.453 fused_ordering(151) 00:16:53.453 fused_ordering(152) 00:16:53.453 fused_ordering(153) 00:16:53.453 fused_ordering(154) 00:16:53.453 fused_ordering(155) 00:16:53.453 fused_ordering(156) 00:16:53.453 fused_ordering(157) 00:16:53.453 fused_ordering(158) 00:16:53.453 fused_ordering(159) 00:16:53.453 fused_ordering(160) 00:16:53.453 fused_ordering(161) 00:16:53.453 fused_ordering(162) 00:16:53.453 fused_ordering(163) 00:16:53.453 fused_ordering(164) 00:16:53.453 fused_ordering(165) 00:16:53.453 fused_ordering(166) 00:16:53.453 fused_ordering(167) 00:16:53.453 fused_ordering(168) 00:16:53.453 fused_ordering(169) 00:16:53.453 fused_ordering(170) 00:16:53.453 fused_ordering(171) 00:16:53.453 fused_ordering(172) 00:16:53.453 fused_ordering(173) 00:16:53.453 fused_ordering(174) 00:16:53.453 fused_ordering(175) 00:16:53.453 fused_ordering(176) 00:16:53.453 fused_ordering(177) 00:16:53.453 fused_ordering(178) 00:16:53.453 fused_ordering(179) 00:16:53.453 fused_ordering(180) 00:16:53.453 fused_ordering(181) 00:16:53.453 fused_ordering(182) 00:16:53.453 fused_ordering(183) 00:16:53.453 fused_ordering(184) 00:16:53.453 fused_ordering(185) 00:16:53.453 fused_ordering(186) 00:16:53.453 fused_ordering(187) 00:16:53.453 fused_ordering(188) 00:16:53.453 fused_ordering(189) 00:16:53.453 fused_ordering(190) 00:16:53.453 fused_ordering(191) 00:16:53.453 fused_ordering(192) 00:16:53.453 fused_ordering(193) 00:16:53.453 fused_ordering(194) 00:16:53.453 fused_ordering(195) 00:16:53.453 fused_ordering(196) 00:16:53.453 fused_ordering(197) 00:16:53.453 fused_ordering(198) 00:16:53.453 fused_ordering(199) 00:16:53.453 fused_ordering(200) 00:16:53.453 fused_ordering(201) 00:16:53.453 fused_ordering(202) 00:16:53.453 fused_ordering(203) 00:16:53.453 fused_ordering(204) 00:16:53.453 fused_ordering(205) 00:16:53.712 
fused_ordering(206) 00:16:53.712 fused_ordering(207) 00:16:53.712 fused_ordering(208) 00:16:53.712 fused_ordering(209) 00:16:53.712 fused_ordering(210) 00:16:53.712 fused_ordering(211) 00:16:53.712 fused_ordering(212) 00:16:53.712 fused_ordering(213) 00:16:53.712 fused_ordering(214) 00:16:53.712 fused_ordering(215) 00:16:53.712 fused_ordering(216) 00:16:53.712 fused_ordering(217) 00:16:53.712 fused_ordering(218) 00:16:53.712 fused_ordering(219) 00:16:53.712 fused_ordering(220) 00:16:53.712 fused_ordering(221) 00:16:53.712 fused_ordering(222) 00:16:53.712 fused_ordering(223) 00:16:53.712 fused_ordering(224) 00:16:53.712 fused_ordering(225) 00:16:53.712 fused_ordering(226) 00:16:53.712 fused_ordering(227) 00:16:53.712 fused_ordering(228) 00:16:53.712 fused_ordering(229) 00:16:53.712 fused_ordering(230) 00:16:53.712 fused_ordering(231) 00:16:53.712 fused_ordering(232) 00:16:53.712 fused_ordering(233) 00:16:53.712 fused_ordering(234) 00:16:53.712 fused_ordering(235) 00:16:53.712 fused_ordering(236) 00:16:53.712 fused_ordering(237) 00:16:53.712 fused_ordering(238) 00:16:53.712 fused_ordering(239) 00:16:53.712 fused_ordering(240) 00:16:53.712 fused_ordering(241) 00:16:53.712 fused_ordering(242) 00:16:53.712 fused_ordering(243) 00:16:53.712 fused_ordering(244) 00:16:53.712 fused_ordering(245) 00:16:53.712 fused_ordering(246) 00:16:53.712 fused_ordering(247) 00:16:53.712 fused_ordering(248) 00:16:53.712 fused_ordering(249) 00:16:53.712 fused_ordering(250) 00:16:53.712 fused_ordering(251) 00:16:53.712 fused_ordering(252) 00:16:53.712 fused_ordering(253) 00:16:53.712 fused_ordering(254) 00:16:53.712 fused_ordering(255) 00:16:53.712 fused_ordering(256) 00:16:53.712 fused_ordering(257) 00:16:53.712 fused_ordering(258) 00:16:53.712 fused_ordering(259) 00:16:53.712 fused_ordering(260) 00:16:53.712 fused_ordering(261) 00:16:53.712 fused_ordering(262) 00:16:53.712 fused_ordering(263) 00:16:53.712 fused_ordering(264) 00:16:53.712 fused_ordering(265) 00:16:53.712 fused_ordering(266) 00:16:53.712 fused_ordering(267) 00:16:53.712 fused_ordering(268) 00:16:53.712 fused_ordering(269) 00:16:53.712 fused_ordering(270) 00:16:53.712 fused_ordering(271) 00:16:53.712 fused_ordering(272) 00:16:53.712 fused_ordering(273) 00:16:53.712 fused_ordering(274) 00:16:53.712 fused_ordering(275) 00:16:53.712 fused_ordering(276) 00:16:53.712 fused_ordering(277) 00:16:53.712 fused_ordering(278) 00:16:53.712 fused_ordering(279) 00:16:53.712 fused_ordering(280) 00:16:53.712 fused_ordering(281) 00:16:53.712 fused_ordering(282) 00:16:53.712 fused_ordering(283) 00:16:53.712 fused_ordering(284) 00:16:53.712 fused_ordering(285) 00:16:53.712 fused_ordering(286) 00:16:53.712 fused_ordering(287) 00:16:53.712 fused_ordering(288) 00:16:53.712 fused_ordering(289) 00:16:53.712 fused_ordering(290) 00:16:53.712 fused_ordering(291) 00:16:53.712 fused_ordering(292) 00:16:53.712 fused_ordering(293) 00:16:53.712 fused_ordering(294) 00:16:53.712 fused_ordering(295) 00:16:53.712 fused_ordering(296) 00:16:53.712 fused_ordering(297) 00:16:53.712 fused_ordering(298) 00:16:53.712 fused_ordering(299) 00:16:53.712 fused_ordering(300) 00:16:53.712 fused_ordering(301) 00:16:53.712 fused_ordering(302) 00:16:53.712 fused_ordering(303) 00:16:53.712 fused_ordering(304) 00:16:53.712 fused_ordering(305) 00:16:53.712 fused_ordering(306) 00:16:53.712 fused_ordering(307) 00:16:53.712 fused_ordering(308) 00:16:53.712 fused_ordering(309) 00:16:53.712 fused_ordering(310) 00:16:53.712 fused_ordering(311) 00:16:53.712 fused_ordering(312) 00:16:53.712 fused_ordering(313) 
00:16:53.712 fused_ordering(314) 00:16:53.712 fused_ordering(315) 00:16:53.712 fused_ordering(316) 00:16:53.712 fused_ordering(317) 00:16:53.712 fused_ordering(318) 00:16:53.712 fused_ordering(319) 00:16:53.712 fused_ordering(320) 00:16:53.712 fused_ordering(321) 00:16:53.712 fused_ordering(322) 00:16:53.712 fused_ordering(323) 00:16:53.712 fused_ordering(324) 00:16:53.712 fused_ordering(325) 00:16:53.712 fused_ordering(326) 00:16:53.712 fused_ordering(327) 00:16:53.712 fused_ordering(328) 00:16:53.712 fused_ordering(329) 00:16:53.712 fused_ordering(330) 00:16:53.712 fused_ordering(331) 00:16:53.712 fused_ordering(332) 00:16:53.712 fused_ordering(333) 00:16:53.712 fused_ordering(334) 00:16:53.712 fused_ordering(335) 00:16:53.712 fused_ordering(336) 00:16:53.712 fused_ordering(337) 00:16:53.712 fused_ordering(338) 00:16:53.712 fused_ordering(339) 00:16:53.712 fused_ordering(340) 00:16:53.712 fused_ordering(341) 00:16:53.712 fused_ordering(342) 00:16:53.712 fused_ordering(343) 00:16:53.712 fused_ordering(344) 00:16:53.712 fused_ordering(345) 00:16:53.712 fused_ordering(346) 00:16:53.712 fused_ordering(347) 00:16:53.712 fused_ordering(348) 00:16:53.712 fused_ordering(349) 00:16:53.712 fused_ordering(350) 00:16:53.712 fused_ordering(351) 00:16:53.712 fused_ordering(352) 00:16:53.712 fused_ordering(353) 00:16:53.712 fused_ordering(354) 00:16:53.712 fused_ordering(355) 00:16:53.712 fused_ordering(356) 00:16:53.712 fused_ordering(357) 00:16:53.712 fused_ordering(358) 00:16:53.712 fused_ordering(359) 00:16:53.712 fused_ordering(360) 00:16:53.712 fused_ordering(361) 00:16:53.712 fused_ordering(362) 00:16:53.712 fused_ordering(363) 00:16:53.712 fused_ordering(364) 00:16:53.712 fused_ordering(365) 00:16:53.712 fused_ordering(366) 00:16:53.712 fused_ordering(367) 00:16:53.712 fused_ordering(368) 00:16:53.712 fused_ordering(369) 00:16:53.712 fused_ordering(370) 00:16:53.712 fused_ordering(371) 00:16:53.712 fused_ordering(372) 00:16:53.712 fused_ordering(373) 00:16:53.712 fused_ordering(374) 00:16:53.712 fused_ordering(375) 00:16:53.712 fused_ordering(376) 00:16:53.712 fused_ordering(377) 00:16:53.712 fused_ordering(378) 00:16:53.712 fused_ordering(379) 00:16:53.712 fused_ordering(380) 00:16:53.712 fused_ordering(381) 00:16:53.712 fused_ordering(382) 00:16:53.712 fused_ordering(383) 00:16:53.712 fused_ordering(384) 00:16:53.712 fused_ordering(385) 00:16:53.712 fused_ordering(386) 00:16:53.712 fused_ordering(387) 00:16:53.712 fused_ordering(388) 00:16:53.712 fused_ordering(389) 00:16:53.712 fused_ordering(390) 00:16:53.712 fused_ordering(391) 00:16:53.712 fused_ordering(392) 00:16:53.712 fused_ordering(393) 00:16:53.712 fused_ordering(394) 00:16:53.712 fused_ordering(395) 00:16:53.712 fused_ordering(396) 00:16:53.712 fused_ordering(397) 00:16:53.712 fused_ordering(398) 00:16:53.712 fused_ordering(399) 00:16:53.712 fused_ordering(400) 00:16:53.712 fused_ordering(401) 00:16:53.712 fused_ordering(402) 00:16:53.712 fused_ordering(403) 00:16:53.712 fused_ordering(404) 00:16:53.712 fused_ordering(405) 00:16:53.712 fused_ordering(406) 00:16:53.712 fused_ordering(407) 00:16:53.712 fused_ordering(408) 00:16:53.712 fused_ordering(409) 00:16:53.712 fused_ordering(410) 00:16:53.970 fused_ordering(411) 00:16:53.970 fused_ordering(412) 00:16:53.970 fused_ordering(413) 00:16:53.970 fused_ordering(414) 00:16:53.970 fused_ordering(415) 00:16:53.970 fused_ordering(416) 00:16:53.970 fused_ordering(417) 00:16:53.970 fused_ordering(418) 00:16:53.970 fused_ordering(419) 00:16:53.970 fused_ordering(420) 00:16:53.970 
fused_ordering(421) 00:16:53.970 fused_ordering(422) 00:16:53.970 fused_ordering(423) 00:16:53.970 fused_ordering(424) 00:16:53.970 fused_ordering(425) 00:16:53.970 fused_ordering(426) 00:16:53.970 fused_ordering(427) 00:16:53.970 fused_ordering(428) 00:16:53.970 fused_ordering(429) 00:16:53.970 fused_ordering(430) 00:16:53.970 fused_ordering(431) 00:16:53.970 fused_ordering(432) 00:16:53.970 fused_ordering(433) 00:16:53.970 fused_ordering(434) 00:16:53.970 fused_ordering(435) 00:16:53.970 fused_ordering(436) 00:16:53.970 fused_ordering(437) 00:16:53.970 fused_ordering(438) 00:16:53.970 fused_ordering(439) 00:16:53.970 fused_ordering(440) 00:16:53.970 fused_ordering(441) 00:16:53.970 fused_ordering(442) 00:16:53.970 fused_ordering(443) 00:16:53.970 fused_ordering(444) 00:16:53.970 fused_ordering(445) 00:16:53.970 fused_ordering(446) 00:16:53.970 fused_ordering(447) 00:16:53.970 fused_ordering(448) 00:16:53.970 fused_ordering(449) 00:16:53.970 fused_ordering(450) 00:16:53.970 fused_ordering(451) 00:16:53.970 fused_ordering(452) 00:16:53.970 fused_ordering(453) 00:16:53.970 fused_ordering(454) 00:16:53.970 fused_ordering(455) 00:16:53.970 fused_ordering(456) 00:16:53.970 fused_ordering(457) 00:16:53.970 fused_ordering(458) 00:16:53.970 fused_ordering(459) 00:16:53.970 fused_ordering(460) 00:16:53.970 fused_ordering(461) 00:16:53.970 fused_ordering(462) 00:16:53.970 fused_ordering(463) 00:16:53.970 fused_ordering(464) 00:16:53.970 fused_ordering(465) 00:16:53.970 fused_ordering(466) 00:16:53.970 fused_ordering(467) 00:16:53.970 fused_ordering(468) 00:16:53.970 fused_ordering(469) 00:16:53.970 fused_ordering(470) 00:16:53.970 fused_ordering(471) 00:16:53.970 fused_ordering(472) 00:16:53.970 fused_ordering(473) 00:16:53.970 fused_ordering(474) 00:16:53.970 fused_ordering(475) 00:16:53.970 fused_ordering(476) 00:16:53.970 fused_ordering(477) 00:16:53.970 fused_ordering(478) 00:16:53.970 fused_ordering(479) 00:16:53.970 fused_ordering(480) 00:16:53.970 fused_ordering(481) 00:16:53.970 fused_ordering(482) 00:16:53.970 fused_ordering(483) 00:16:53.970 fused_ordering(484) 00:16:53.970 fused_ordering(485) 00:16:53.970 fused_ordering(486) 00:16:53.970 fused_ordering(487) 00:16:53.970 fused_ordering(488) 00:16:53.970 fused_ordering(489) 00:16:53.970 fused_ordering(490) 00:16:53.970 fused_ordering(491) 00:16:53.970 fused_ordering(492) 00:16:53.970 fused_ordering(493) 00:16:53.970 fused_ordering(494) 00:16:53.970 fused_ordering(495) 00:16:53.970 fused_ordering(496) 00:16:53.970 fused_ordering(497) 00:16:53.970 fused_ordering(498) 00:16:53.970 fused_ordering(499) 00:16:53.970 fused_ordering(500) 00:16:53.970 fused_ordering(501) 00:16:53.970 fused_ordering(502) 00:16:53.970 fused_ordering(503) 00:16:53.970 fused_ordering(504) 00:16:53.970 fused_ordering(505) 00:16:53.970 fused_ordering(506) 00:16:53.971 fused_ordering(507) 00:16:53.971 fused_ordering(508) 00:16:53.971 fused_ordering(509) 00:16:53.971 fused_ordering(510) 00:16:53.971 fused_ordering(511) 00:16:53.971 fused_ordering(512) 00:16:53.971 fused_ordering(513) 00:16:53.971 fused_ordering(514) 00:16:53.971 fused_ordering(515) 00:16:53.971 fused_ordering(516) 00:16:53.971 fused_ordering(517) 00:16:53.971 fused_ordering(518) 00:16:53.971 fused_ordering(519) 00:16:53.971 fused_ordering(520) 00:16:53.971 fused_ordering(521) 00:16:53.971 fused_ordering(522) 00:16:53.971 fused_ordering(523) 00:16:53.971 fused_ordering(524) 00:16:53.971 fused_ordering(525) 00:16:53.971 fused_ordering(526) 00:16:53.971 fused_ordering(527) 00:16:53.971 fused_ordering(528) 
00:16:53.971 fused_ordering(529) 00:16:53.971 fused_ordering(530) 00:16:53.971 fused_ordering(531) 00:16:53.971 fused_ordering(532) 00:16:53.971 fused_ordering(533) 00:16:53.971 fused_ordering(534) 00:16:53.971 fused_ordering(535) 00:16:53.971 fused_ordering(536) 00:16:53.971 fused_ordering(537) 00:16:53.971 fused_ordering(538) 00:16:53.971 fused_ordering(539) 00:16:53.971 fused_ordering(540) 00:16:53.971 fused_ordering(541) 00:16:53.971 fused_ordering(542) 00:16:53.971 fused_ordering(543) 00:16:53.971 fused_ordering(544) 00:16:53.971 fused_ordering(545) 00:16:53.971 fused_ordering(546) 00:16:53.971 fused_ordering(547) 00:16:53.971 fused_ordering(548) 00:16:53.971 fused_ordering(549) 00:16:53.971 fused_ordering(550) 00:16:53.971 fused_ordering(551) 00:16:53.971 fused_ordering(552) 00:16:53.971 fused_ordering(553) 00:16:53.971 fused_ordering(554) 00:16:53.971 fused_ordering(555) 00:16:53.971 fused_ordering(556) 00:16:53.971 fused_ordering(557) 00:16:53.971 fused_ordering(558) 00:16:53.971 fused_ordering(559) 00:16:53.971 fused_ordering(560) 00:16:53.971 fused_ordering(561) 00:16:53.971 fused_ordering(562) 00:16:53.971 fused_ordering(563) 00:16:53.971 fused_ordering(564) 00:16:53.971 fused_ordering(565) 00:16:53.971 fused_ordering(566) 00:16:53.971 fused_ordering(567) 00:16:53.971 fused_ordering(568) 00:16:53.971 fused_ordering(569) 00:16:53.971 fused_ordering(570) 00:16:53.971 fused_ordering(571) 00:16:53.971 fused_ordering(572) 00:16:53.971 fused_ordering(573) 00:16:53.971 fused_ordering(574) 00:16:53.971 fused_ordering(575) 00:16:53.971 fused_ordering(576) 00:16:53.971 fused_ordering(577) 00:16:53.971 fused_ordering(578) 00:16:53.971 fused_ordering(579) 00:16:53.971 fused_ordering(580) 00:16:53.971 fused_ordering(581) 00:16:53.971 fused_ordering(582) 00:16:53.971 fused_ordering(583) 00:16:53.971 fused_ordering(584) 00:16:53.971 fused_ordering(585) 00:16:53.971 fused_ordering(586) 00:16:53.971 fused_ordering(587) 00:16:53.971 fused_ordering(588) 00:16:53.971 fused_ordering(589) 00:16:53.971 fused_ordering(590) 00:16:53.971 fused_ordering(591) 00:16:53.971 fused_ordering(592) 00:16:53.971 fused_ordering(593) 00:16:53.971 fused_ordering(594) 00:16:53.971 fused_ordering(595) 00:16:53.971 fused_ordering(596) 00:16:53.971 fused_ordering(597) 00:16:53.971 fused_ordering(598) 00:16:53.971 fused_ordering(599) 00:16:53.971 fused_ordering(600) 00:16:53.971 fused_ordering(601) 00:16:53.971 fused_ordering(602) 00:16:53.971 fused_ordering(603) 00:16:53.971 fused_ordering(604) 00:16:53.971 fused_ordering(605) 00:16:53.971 fused_ordering(606) 00:16:53.971 fused_ordering(607) 00:16:53.971 fused_ordering(608) 00:16:53.971 fused_ordering(609) 00:16:53.971 fused_ordering(610) 00:16:53.971 fused_ordering(611) 00:16:53.971 fused_ordering(612) 00:16:53.971 fused_ordering(613) 00:16:53.971 fused_ordering(614) 00:16:53.971 fused_ordering(615) 00:16:54.536 fused_ordering(616) 00:16:54.536 fused_ordering(617) 00:16:54.536 fused_ordering(618) 00:16:54.536 fused_ordering(619) 00:16:54.536 fused_ordering(620) 00:16:54.536 fused_ordering(621) 00:16:54.536 fused_ordering(622) 00:16:54.536 fused_ordering(623) 00:16:54.536 fused_ordering(624) 00:16:54.536 fused_ordering(625) 00:16:54.536 fused_ordering(626) 00:16:54.536 fused_ordering(627) 00:16:54.536 fused_ordering(628) 00:16:54.536 fused_ordering(629) 00:16:54.536 fused_ordering(630) 00:16:54.536 fused_ordering(631) 00:16:54.536 fused_ordering(632) 00:16:54.536 fused_ordering(633) 00:16:54.536 fused_ordering(634) 00:16:54.536 fused_ordering(635) 00:16:54.536 
fused_ordering(636) 00:16:54.536 fused_ordering(637) 00:16:54.536 fused_ordering(638) 00:16:54.536 fused_ordering(639) 00:16:54.536 fused_ordering(640) 00:16:54.536 fused_ordering(641) 00:16:54.536 fused_ordering(642) 00:16:54.536 fused_ordering(643) 00:16:54.536 fused_ordering(644) 00:16:54.536 fused_ordering(645) 00:16:54.536 fused_ordering(646) 00:16:54.536 fused_ordering(647) 00:16:54.536 fused_ordering(648) 00:16:54.536 fused_ordering(649) 00:16:54.536 fused_ordering(650) 00:16:54.536 fused_ordering(651) 00:16:54.536 fused_ordering(652) 00:16:54.536 fused_ordering(653) 00:16:54.536 fused_ordering(654) 00:16:54.536 fused_ordering(655) 00:16:54.536 fused_ordering(656) 00:16:54.536 fused_ordering(657) 00:16:54.536 fused_ordering(658) 00:16:54.536 fused_ordering(659) 00:16:54.536 fused_ordering(660) 00:16:54.536 fused_ordering(661) 00:16:54.536 fused_ordering(662) 00:16:54.536 fused_ordering(663) 00:16:54.536 fused_ordering(664) 00:16:54.536 fused_ordering(665) 00:16:54.536 fused_ordering(666) 00:16:54.536 fused_ordering(667) 00:16:54.536 fused_ordering(668) 00:16:54.536 fused_ordering(669) 00:16:54.536 fused_ordering(670) 00:16:54.536 fused_ordering(671) 00:16:54.536 fused_ordering(672) 00:16:54.536 fused_ordering(673) 00:16:54.536 fused_ordering(674) 00:16:54.536 fused_ordering(675) 00:16:54.536 fused_ordering(676) 00:16:54.536 fused_ordering(677) 00:16:54.536 fused_ordering(678) 00:16:54.536 fused_ordering(679) 00:16:54.536 fused_ordering(680) 00:16:54.536 fused_ordering(681) 00:16:54.536 fused_ordering(682) 00:16:54.536 fused_ordering(683) 00:16:54.536 fused_ordering(684) 00:16:54.536 fused_ordering(685) 00:16:54.536 fused_ordering(686) 00:16:54.536 fused_ordering(687) 00:16:54.536 fused_ordering(688) 00:16:54.536 fused_ordering(689) 00:16:54.536 fused_ordering(690) 00:16:54.536 fused_ordering(691) 00:16:54.536 fused_ordering(692) 00:16:54.536 fused_ordering(693) 00:16:54.536 fused_ordering(694) 00:16:54.536 fused_ordering(695) 00:16:54.536 fused_ordering(696) 00:16:54.536 fused_ordering(697) 00:16:54.536 fused_ordering(698) 00:16:54.536 fused_ordering(699) 00:16:54.536 fused_ordering(700) 00:16:54.536 fused_ordering(701) 00:16:54.536 fused_ordering(702) 00:16:54.536 fused_ordering(703) 00:16:54.536 fused_ordering(704) 00:16:54.536 fused_ordering(705) 00:16:54.536 fused_ordering(706) 00:16:54.536 fused_ordering(707) 00:16:54.536 fused_ordering(708) 00:16:54.536 fused_ordering(709) 00:16:54.536 fused_ordering(710) 00:16:54.536 fused_ordering(711) 00:16:54.536 fused_ordering(712) 00:16:54.536 fused_ordering(713) 00:16:54.536 fused_ordering(714) 00:16:54.536 fused_ordering(715) 00:16:54.536 fused_ordering(716) 00:16:54.536 fused_ordering(717) 00:16:54.536 fused_ordering(718) 00:16:54.536 fused_ordering(719) 00:16:54.536 fused_ordering(720) 00:16:54.536 fused_ordering(721) 00:16:54.536 fused_ordering(722) 00:16:54.536 fused_ordering(723) 00:16:54.536 fused_ordering(724) 00:16:54.536 fused_ordering(725) 00:16:54.536 fused_ordering(726) 00:16:54.536 fused_ordering(727) 00:16:54.536 fused_ordering(728) 00:16:54.536 fused_ordering(729) 00:16:54.536 fused_ordering(730) 00:16:54.536 fused_ordering(731) 00:16:54.536 fused_ordering(732) 00:16:54.536 fused_ordering(733) 00:16:54.536 fused_ordering(734) 00:16:54.536 fused_ordering(735) 00:16:54.536 fused_ordering(736) 00:16:54.536 fused_ordering(737) 00:16:54.536 fused_ordering(738) 00:16:54.536 fused_ordering(739) 00:16:54.536 fused_ordering(740) 00:16:54.536 fused_ordering(741) 00:16:54.536 fused_ordering(742) 00:16:54.536 fused_ordering(743) 
00:16:54.536 fused_ordering(744) 00:16:54.536 fused_ordering(745) 00:16:54.536 fused_ordering(746) 00:16:54.536 fused_ordering(747) 00:16:54.536 fused_ordering(748) 00:16:54.536 fused_ordering(749) 00:16:54.536 fused_ordering(750) 00:16:54.536 fused_ordering(751) 00:16:54.536 fused_ordering(752) 00:16:54.536 fused_ordering(753) 00:16:54.536 fused_ordering(754) 00:16:54.536 fused_ordering(755) 00:16:54.536 fused_ordering(756) 00:16:54.536 fused_ordering(757) 00:16:54.536 fused_ordering(758) 00:16:54.536 fused_ordering(759) 00:16:54.536 fused_ordering(760) 00:16:54.536 fused_ordering(761) 00:16:54.536 fused_ordering(762) 00:16:54.536 fused_ordering(763) 00:16:54.536 fused_ordering(764) 00:16:54.536 fused_ordering(765) 00:16:54.536 fused_ordering(766) 00:16:54.536 fused_ordering(767) 00:16:54.536 fused_ordering(768) 00:16:54.536 fused_ordering(769) 00:16:54.536 fused_ordering(770) 00:16:54.536 fused_ordering(771) 00:16:54.536 fused_ordering(772) 00:16:54.536 fused_ordering(773) 00:16:54.536 fused_ordering(774) 00:16:54.536 fused_ordering(775) 00:16:54.536 fused_ordering(776) 00:16:54.536 fused_ordering(777) 00:16:54.536 fused_ordering(778) 00:16:54.536 fused_ordering(779) 00:16:54.536 fused_ordering(780) 00:16:54.536 fused_ordering(781) 00:16:54.536 fused_ordering(782) 00:16:54.536 fused_ordering(783) 00:16:54.536 fused_ordering(784) 00:16:54.536 fused_ordering(785) 00:16:54.536 fused_ordering(786) 00:16:54.536 fused_ordering(787) 00:16:54.536 fused_ordering(788) 00:16:54.536 fused_ordering(789) 00:16:54.536 fused_ordering(790) 00:16:54.536 fused_ordering(791) 00:16:54.536 fused_ordering(792) 00:16:54.536 fused_ordering(793) 00:16:54.536 fused_ordering(794) 00:16:54.536 fused_ordering(795) 00:16:54.536 fused_ordering(796) 00:16:54.536 fused_ordering(797) 00:16:54.536 fused_ordering(798) 00:16:54.536 fused_ordering(799) 00:16:54.536 fused_ordering(800) 00:16:54.536 fused_ordering(801) 00:16:54.536 fused_ordering(802) 00:16:54.536 fused_ordering(803) 00:16:54.536 fused_ordering(804) 00:16:54.536 fused_ordering(805) 00:16:54.536 fused_ordering(806) 00:16:54.536 fused_ordering(807) 00:16:54.536 fused_ordering(808) 00:16:54.536 fused_ordering(809) 00:16:54.536 fused_ordering(810) 00:16:54.536 fused_ordering(811) 00:16:54.536 fused_ordering(812) 00:16:54.536 fused_ordering(813) 00:16:54.536 fused_ordering(814) 00:16:54.536 fused_ordering(815) 00:16:54.536 fused_ordering(816) 00:16:54.536 fused_ordering(817) 00:16:54.536 fused_ordering(818) 00:16:54.536 fused_ordering(819) 00:16:54.536 fused_ordering(820) 00:16:55.103 fused_ordering(821) 00:16:55.103 fused_ordering(822) 00:16:55.103 fused_ordering(823) 00:16:55.103 fused_ordering(824) 00:16:55.103 fused_ordering(825) 00:16:55.103 fused_ordering(826) 00:16:55.103 fused_ordering(827) 00:16:55.103 fused_ordering(828) 00:16:55.103 fused_ordering(829) 00:16:55.103 fused_ordering(830) 00:16:55.103 fused_ordering(831) 00:16:55.103 fused_ordering(832) 00:16:55.103 fused_ordering(833) 00:16:55.103 fused_ordering(834) 00:16:55.103 fused_ordering(835) 00:16:55.103 fused_ordering(836) 00:16:55.103 fused_ordering(837) 00:16:55.103 fused_ordering(838) 00:16:55.103 fused_ordering(839) 00:16:55.103 fused_ordering(840) 00:16:55.103 fused_ordering(841) 00:16:55.103 fused_ordering(842) 00:16:55.103 fused_ordering(843) 00:16:55.103 fused_ordering(844) 00:16:55.103 fused_ordering(845) 00:16:55.103 fused_ordering(846) 00:16:55.103 fused_ordering(847) 00:16:55.103 fused_ordering(848) 00:16:55.103 fused_ordering(849) 00:16:55.103 fused_ordering(850) 00:16:55.103 
fused_ordering(851) 00:16:55.103 fused_ordering(852) 00:16:55.103 fused_ordering(853) 00:16:55.103 fused_ordering(854) 00:16:55.103 fused_ordering(855) 00:16:55.103 fused_ordering(856) 00:16:55.103 fused_ordering(857) 00:16:55.103 fused_ordering(858) 00:16:55.103 fused_ordering(859) 00:16:55.103 fused_ordering(860) 00:16:55.103 fused_ordering(861) 00:16:55.103 fused_ordering(862) 00:16:55.103 fused_ordering(863) 00:16:55.103 fused_ordering(864) 00:16:55.103 fused_ordering(865) 00:16:55.103 fused_ordering(866) 00:16:55.103 fused_ordering(867) 00:16:55.103 fused_ordering(868) 00:16:55.103 fused_ordering(869) 00:16:55.103 fused_ordering(870) 00:16:55.103 fused_ordering(871) 00:16:55.103 fused_ordering(872) 00:16:55.103 fused_ordering(873) 00:16:55.103 fused_ordering(874) 00:16:55.103 fused_ordering(875) 00:16:55.103 fused_ordering(876) 00:16:55.103 fused_ordering(877) 00:16:55.103 fused_ordering(878) 00:16:55.103 fused_ordering(879) 00:16:55.103 fused_ordering(880) 00:16:55.103 fused_ordering(881) 00:16:55.103 fused_ordering(882) 00:16:55.103 fused_ordering(883) 00:16:55.103 fused_ordering(884) 00:16:55.103 fused_ordering(885) 00:16:55.103 fused_ordering(886) 00:16:55.103 fused_ordering(887) 00:16:55.103 fused_ordering(888) 00:16:55.103 fused_ordering(889) 00:16:55.103 fused_ordering(890) 00:16:55.103 fused_ordering(891) 00:16:55.103 fused_ordering(892) 00:16:55.103 fused_ordering(893) 00:16:55.103 fused_ordering(894) 00:16:55.103 fused_ordering(895) 00:16:55.103 fused_ordering(896) 00:16:55.103 fused_ordering(897) 00:16:55.103 fused_ordering(898) 00:16:55.103 fused_ordering(899) 00:16:55.103 fused_ordering(900) 00:16:55.103 fused_ordering(901) 00:16:55.103 fused_ordering(902) 00:16:55.103 fused_ordering(903) 00:16:55.103 fused_ordering(904) 00:16:55.103 fused_ordering(905) 00:16:55.103 fused_ordering(906) 00:16:55.103 fused_ordering(907) 00:16:55.103 fused_ordering(908) 00:16:55.103 fused_ordering(909) 00:16:55.103 fused_ordering(910) 00:16:55.103 fused_ordering(911) 00:16:55.103 fused_ordering(912) 00:16:55.103 fused_ordering(913) 00:16:55.103 fused_ordering(914) 00:16:55.103 fused_ordering(915) 00:16:55.103 fused_ordering(916) 00:16:55.103 fused_ordering(917) 00:16:55.103 fused_ordering(918) 00:16:55.103 fused_ordering(919) 00:16:55.103 fused_ordering(920) 00:16:55.103 fused_ordering(921) 00:16:55.103 fused_ordering(922) 00:16:55.103 fused_ordering(923) 00:16:55.103 fused_ordering(924) 00:16:55.103 fused_ordering(925) 00:16:55.103 fused_ordering(926) 00:16:55.103 fused_ordering(927) 00:16:55.103 fused_ordering(928) 00:16:55.103 fused_ordering(929) 00:16:55.103 fused_ordering(930) 00:16:55.103 fused_ordering(931) 00:16:55.103 fused_ordering(932) 00:16:55.103 fused_ordering(933) 00:16:55.103 fused_ordering(934) 00:16:55.103 fused_ordering(935) 00:16:55.103 fused_ordering(936) 00:16:55.103 fused_ordering(937) 00:16:55.103 fused_ordering(938) 00:16:55.103 fused_ordering(939) 00:16:55.103 fused_ordering(940) 00:16:55.103 fused_ordering(941) 00:16:55.103 fused_ordering(942) 00:16:55.103 fused_ordering(943) 00:16:55.103 fused_ordering(944) 00:16:55.103 fused_ordering(945) 00:16:55.103 fused_ordering(946) 00:16:55.103 fused_ordering(947) 00:16:55.103 fused_ordering(948) 00:16:55.103 fused_ordering(949) 00:16:55.103 fused_ordering(950) 00:16:55.103 fused_ordering(951) 00:16:55.103 fused_ordering(952) 00:16:55.103 fused_ordering(953) 00:16:55.103 fused_ordering(954) 00:16:55.103 fused_ordering(955) 00:16:55.103 fused_ordering(956) 00:16:55.103 fused_ordering(957) 00:16:55.103 fused_ordering(958) 
00:16:55.103 fused_ordering(959) 00:16:55.103 fused_ordering(960) 00:16:55.103 fused_ordering(961) 00:16:55.103 fused_ordering(962) 00:16:55.103 fused_ordering(963) 00:16:55.103 fused_ordering(964) 00:16:55.103 fused_ordering(965) 00:16:55.103 fused_ordering(966) 00:16:55.103 fused_ordering(967) 00:16:55.103 fused_ordering(968) 00:16:55.103 fused_ordering(969) 00:16:55.103 fused_ordering(970) 00:16:55.103 fused_ordering(971) 00:16:55.103 fused_ordering(972) 00:16:55.103 fused_ordering(973) 00:16:55.103 fused_ordering(974) 00:16:55.103 fused_ordering(975) 00:16:55.103 fused_ordering(976) 00:16:55.103 fused_ordering(977) 00:16:55.103 fused_ordering(978) 00:16:55.103 fused_ordering(979) 00:16:55.103 fused_ordering(980) 00:16:55.103 fused_ordering(981) 00:16:55.103 fused_ordering(982) 00:16:55.103 fused_ordering(983) 00:16:55.103 fused_ordering(984) 00:16:55.103 fused_ordering(985) 00:16:55.103 fused_ordering(986) 00:16:55.103 fused_ordering(987) 00:16:55.103 fused_ordering(988) 00:16:55.104 fused_ordering(989) 00:16:55.104 fused_ordering(990) 00:16:55.104 fused_ordering(991) 00:16:55.104 fused_ordering(992) 00:16:55.104 fused_ordering(993) 00:16:55.104 fused_ordering(994) 00:16:55.104 fused_ordering(995) 00:16:55.104 fused_ordering(996) 00:16:55.104 fused_ordering(997) 00:16:55.104 fused_ordering(998) 00:16:55.104 fused_ordering(999) 00:16:55.104 fused_ordering(1000) 00:16:55.104 fused_ordering(1001) 00:16:55.104 fused_ordering(1002) 00:16:55.104 fused_ordering(1003) 00:16:55.104 fused_ordering(1004) 00:16:55.104 fused_ordering(1005) 00:16:55.104 fused_ordering(1006) 00:16:55.104 fused_ordering(1007) 00:16:55.104 fused_ordering(1008) 00:16:55.104 fused_ordering(1009) 00:16:55.104 fused_ordering(1010) 00:16:55.104 fused_ordering(1011) 00:16:55.104 fused_ordering(1012) 00:16:55.104 fused_ordering(1013) 00:16:55.104 fused_ordering(1014) 00:16:55.104 fused_ordering(1015) 00:16:55.104 fused_ordering(1016) 00:16:55.104 fused_ordering(1017) 00:16:55.104 fused_ordering(1018) 00:16:55.104 fused_ordering(1019) 00:16:55.104 fused_ordering(1020) 00:16:55.104 fused_ordering(1021) 00:16:55.104 fused_ordering(1022) 00:16:55.104 fused_ordering(1023) 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.104 rmmod nvme_tcp 00:16:55.104 rmmod nvme_fabrics 00:16:55.104 rmmod nvme_keyring 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:55.104 02:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 220439 ']' 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 220439 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 220439 ']' 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 220439 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220439 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220439' 00:16:55.104 killing process with pid 220439 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 220439 00:16:55.104 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 220439 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.364 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.272 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:57.272 00:16:57.272 real 0m7.367s 00:16:57.272 user 0m5.144s 00:16:57.272 sys 0m2.808s 00:16:57.272 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.272 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:57.272 ************************************ 00:16:57.272 END TEST nvmf_fused_ordering 00:16:57.272 
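Once the fused_ordering run reaches entry 1023, fused_ordering.sh clears its trap and calls nvmftestfini, which is the cleanup traced above: sync, unload the nvme-tcp/nvme-fabrics modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), kill the nvmf_tgt started for this test (pid 220439, process name reactor_1), restore iptables from a dump filtered of SPDK_NVMF lines, drop the test network namespace, and flush the host-side interface, after which the timing summary (real 0m7.367s) is printed. A rough equivalent of that teardown, reconstructed from the commands in the trace (the pid, namespace and interface names are specific to this run, and the netns delete is an assumption about what the _remove_spdk_ns helper amounts to):

  # Reconstruction of the nvmftestfini steps traced above; values are from this run only.
  sync
  modprobe -v -r nvme-tcp                                  # trace shows rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 220439                                              # nvmf_tgt pid for this test (reactor_1)
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop the SPDK test rules, keep the rest
  ip netns delete cvl_0_0_ns_spdk                          # assumed equivalent of the _remove_spdk_ns helper
  ip -4 addr flush cvl_0_1

The ns_masking suite that starts next reuses the same helpers: it sources nvmf/common.sh again, generates fresh namespace UUIDs and host NQNs with uuidgen, and re-runs nvmftestinit, which rediscovers the e810 ports (0000:0a:00.0 and 0000:0a:00.1) seen in the device scan below.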
************************************ 00:16:57.272 02:58:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:57.272 02:58:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.272 02:58:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.272 02:58:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:57.531 ************************************ 00:16:57.531 START TEST nvmf_ns_masking 00:16:57.531 ************************************ 00:16:57.531 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:57.531 * Looking for test storage... 00:16:57.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.531 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:57.531 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:16:57.531 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.531 --rc genhtml_branch_coverage=1 00:16:57.531 --rc genhtml_function_coverage=1 00:16:57.531 --rc genhtml_legend=1 00:16:57.531 --rc geninfo_all_blocks=1 00:16:57.531 --rc geninfo_unexecuted_blocks=1 00:16:57.531 00:16:57.531 ' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.531 --rc genhtml_branch_coverage=1 00:16:57.531 --rc genhtml_function_coverage=1 00:16:57.531 --rc genhtml_legend=1 00:16:57.531 --rc geninfo_all_blocks=1 00:16:57.531 --rc geninfo_unexecuted_blocks=1 00:16:57.531 00:16:57.531 ' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.531 --rc genhtml_branch_coverage=1 00:16:57.531 --rc genhtml_function_coverage=1 00:16:57.531 --rc genhtml_legend=1 00:16:57.531 --rc geninfo_all_blocks=1 00:16:57.531 --rc geninfo_unexecuted_blocks=1 00:16:57.531 00:16:57.531 ' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.531 --rc genhtml_branch_coverage=1 00:16:57.531 --rc genhtml_function_coverage=1 00:16:57.531 --rc genhtml_legend=1 00:16:57.531 --rc geninfo_all_blocks=1 00:16:57.531 --rc geninfo_unexecuted_blocks=1 00:16:57.531 00:16:57.531 ' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=db22e07e-f572-476a-822d-e7503c7f6988 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7f0f7b66-de29-41fc-b1f6-045b7159ca1c 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:57.531 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5256429c-bd1e-4968-8ebc-6cd72d1ef59b 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.532 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:00.062 02:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.062 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:00.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:00.063 02:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:00.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:00.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
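The trace above is nvmf/common.sh resolving each detected Intel E810 port (vendor 0x8086, device 0x159b) to its kernel net device by reading /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of that lookup, assuming lspci is available and reusing the device IDs echoed in the log (the real script builds its own pci_bus_cache rather than calling lspci), could look like this:

# Sketch only: map Intel E810 (8086:159b) PCI functions to net device names,
# the way the log resolves 0000:0a:00.0 and 0000:0a:00.1 to cvl_0_0 and cvl_0_1.
net_devs=()
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue        # a function bound to a userspace driver exposes no netdev
        dev=${path##*/}                   # keep only the interface name
        echo "Found net devices under $pci: $dev"
        net_devs+=("$dev")
    done
done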
00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:00.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.063 02:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:00.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:17:00.063 00:17:00.063 --- 10.0.0.2 ping statistics --- 00:17:00.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.063 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:17:00.063 00:17:00.063 --- 10.0.0.1 ping statistics --- 00:17:00.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.063 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=222676 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 222676 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 222676 ']' 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.063 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:00.063 [2024-11-19 02:58:10.428565] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:00.064 [2024-11-19 02:58:10.428650] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.064 [2024-11-19 02:58:10.500303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.064 [2024-11-19 02:58:10.547213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.064 [2024-11-19 02:58:10.547289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.064 [2024-11-19 02:58:10.547302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.064 [2024-11-19 02:58:10.547313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.064 [2024-11-19 02:58:10.547322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
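At this point nvmfappstart has launched the target: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with shared-memory id 0 and all tracepoint groups enabled, and the script blocks on its JSON-RPC socket before issuing any RPCs. A minimal sketch of that start-and-wait step, assuming the same SPDK checkout path and a simple polling loop in place of the real waitforlisten helper, might be:

# Sketch only: start nvmf_tgt in the test namespace and poll its RPC socket.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the app is listening on /var/tmp/spdk.sock
    if "$SPDK_DIR/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done
echo "nvmf_tgt is up as pid $nvmfpid"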
00:17:00.064 [2024-11-19 02:58:10.547950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.064 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.064 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:00.064 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.064 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.064 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:00.321 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.321 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.579 [2024-11-19 02:58:10.988742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.579 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:00.579 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:00.579 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:00.837 Malloc1 00:17:00.837 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:01.095 Malloc2 00:17:01.095 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:01.353 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:01.610 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.175 [2024-11-19 02:58:12.521812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.175 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:02.175 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5256429c-bd1e-4968-8ebc-6cd72d1ef59b -a 10.0.0.2 -s 4420 -i 4 00:17:02.175 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.175 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:02.175 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.175 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:02.175 
02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:04.703 [ 0]:0x1 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f79ad2d4f6ee4f9c9b8e8bbed9ea4394 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f79ad2d4f6ee4f9c9b8e8bbed9ea4394 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.703 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:04.703 [ 0]:0x1 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f79ad2d4f6ee4f9c9b8e8bbed9ea4394 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f79ad2d4f6ee4f9c9b8e8bbed9ea4394 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.703 02:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:04.703 [ 1]:0x2 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56c45053219944529da2060fdf749383 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56c45053219944529da2060fdf749383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.703 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.268 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:05.526 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:05.527 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5256429c-bd1e-4968-8ebc-6cd72d1ef59b -a 10.0.0.2 -s 4420 -i 4 00:17:05.784 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:05.784 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:05.784 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.784 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:05.784 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:05.784 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:07.684 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:07.685 [ 0]:0x2 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:07.685 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:07.943 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=56c45053219944529da2060fdf749383 00:17:07.943 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56c45053219944529da2060fdf749383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:07.943 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.201 [ 0]:0x1 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f79ad2d4f6ee4f9c9b8e8bbed9ea4394 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f79ad2d4f6ee4f9c9b8e8bbed9ea4394 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:08.201 [ 1]:0x2 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56c45053219944529da2060fdf749383 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56c45053219944529da2060fdf749383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.201 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.460 02:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.460 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:08.460 [ 0]:0x2 00:17:08.460 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:08.460 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.460 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56c45053219944529da2060fdf749383 00:17:08.460 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56c45053219944529da2060fdf749383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.460 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:08.460 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.724 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:08.981 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:08.981 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5256429c-bd1e-4968-8ebc-6cd72d1ef59b -a 10.0.0.2 -s 4420 -i 4 00:17:09.240 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:09.240 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:09.240 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.240 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:09.240 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:09.240 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.141 [ 0]:0x1 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f79ad2d4f6ee4f9c9b8e8bbed9ea4394 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f79ad2d4f6ee4f9c9b8e8bbed9ea4394 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:11.141 [ 1]:0x2 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:11.141 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.400 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56c45053219944529da2060fdf749383 00:17:11.400 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56c45053219944529da2060fdf749383 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.400 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:11.657 [ 0]:0x2 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56c45053219944529da2060fdf749383 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56c45053219944529da2060fdf749383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.657 02:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:11.657 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:11.916 [2024-11-19 02:58:22.391357] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:11.916 request: 00:17:11.916 { 00:17:11.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.916 "nsid": 2, 00:17:11.916 "host": "nqn.2016-06.io.spdk:host1", 00:17:11.916 "method": "nvmf_ns_remove_host", 00:17:11.916 "req_id": 1 00:17:11.916 } 00:17:11.916 Got JSON-RPC error response 00:17:11.916 response: 00:17:11.916 { 00:17:11.916 "code": -32602, 00:17:11.916 "message": "Invalid parameters" 00:17:11.916 } 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:11.916 02:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:11.916 [ 0]:0x2 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:11.916 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56c45053219944529da2060fdf749383 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56c45053219944529da2060fdf749383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=224291 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 224291 /var/tmp/host.sock 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 224291 ']' 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:12.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.175 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:12.175 [2024-11-19 02:58:22.735607] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:12.175 [2024-11-19 02:58:22.735700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224291 ] 00:17:12.433 [2024-11-19 02:58:22.802090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.433 [2024-11-19 02:58:22.847900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.691 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.691 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:12.691 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.949 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:13.207 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid db22e07e-f572-476a-822d-e7503c7f6988 00:17:13.207 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:13.207 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DB22E07EF572476A822DE7503C7F6988 -i 00:17:13.465 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7f0f7b66-de29-41fc-b1f6-045b7159ca1c 00:17:13.465 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:13.465 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7F0F7B66DE2941FCB1F6045B7159CA1C -i 00:17:13.723 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:13.981 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:14.239 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:14.239 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:14.804 nvme0n1 00:17:14.805 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:14.805 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:15.063 nvme1n2 00:17:15.321 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:15.321 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:15.321 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:15.321 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:15.321 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:15.578 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:15.578 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:15.579 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:15.579 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:15.836 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ db22e07e-f572-476a-822d-e7503c7f6988 == \d\b\2\2\e\0\7\e\-\f\5\7\2\-\4\7\6\a\-\8\2\2\d\-\e\7\5\0\3\c\7\f\6\9\8\8 ]] 00:17:15.837 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:15.837 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:15.837 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:16.095 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
7f0f7b66-de29-41fc-b1f6-045b7159ca1c == \7\f\0\f\7\b\6\6\-\d\e\2\9\-\4\1\f\c\-\b\1\f\6\-\0\4\5\b\7\1\5\9\c\a\1\c ]] 00:17:16.095 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.353 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid db22e07e-f572-476a-822d-e7503c7f6988 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DB22E07EF572476A822DE7503C7F6988 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DB22E07EF572476A822DE7503C7F6988 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:16.611 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DB22E07EF572476A822DE7503C7F6988 00:17:16.869 [2024-11-19 02:58:27.301934] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:16.869 [2024-11-19 02:58:27.302000] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:16.869 [2024-11-19 02:58:27.302025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.869 request: 00:17:16.869 { 00:17:16.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.869 "namespace": { 00:17:16.869 "bdev_name": 
"invalid", 00:17:16.869 "nsid": 1, 00:17:16.869 "nguid": "DB22E07EF572476A822DE7503C7F6988", 00:17:16.869 "no_auto_visible": false 00:17:16.869 }, 00:17:16.869 "method": "nvmf_subsystem_add_ns", 00:17:16.869 "req_id": 1 00:17:16.869 } 00:17:16.869 Got JSON-RPC error response 00:17:16.869 response: 00:17:16.869 { 00:17:16.869 "code": -32602, 00:17:16.869 "message": "Invalid parameters" 00:17:16.869 } 00:17:16.869 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:16.869 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.869 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.869 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.869 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid db22e07e-f572-476a-822d-e7503c7f6988 00:17:16.869 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:16.869 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DB22E07EF572476A822DE7503C7F6988 -i 00:17:17.126 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:19.024 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:19.024 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:19.024 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:19.282 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:19.282 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 224291 00:17:19.282 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 224291 ']' 00:17:19.282 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 224291 00:17:19.282 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:19.282 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.282 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224291 00:17:19.541 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.541 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.541 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224291' 00:17:19.541 killing process with pid 224291 00:17:19.541 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 224291 00:17:19.541 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 224291 00:17:19.799 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.056 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:20.057 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:20.057 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:20.057 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:20.057 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.057 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:20.057 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.057 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.057 rmmod nvme_tcp 00:17:20.057 rmmod nvme_fabrics 00:17:20.315 rmmod nvme_keyring 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 222676 ']' 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 222676 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 222676 ']' 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 222676 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222676 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.315 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.316 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222676' 00:17:20.316 killing process with pid 222676 00:17:20.316 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 222676 00:17:20.316 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 222676 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:20.576 
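For reference, the namespace-masking flow traced above reduces to a short RPC sequence. The following is a condensed, illustrative restatement using the NQNs, NGUIDs and socket paths from this run (rpc.py abbreviated from the full in-tree path; the -i flag is reproduced as the test script uses it, and the final jq-length-0 check above shows that a namespace added this way is not visible to a host without an explicit grant):

  # target side: re-create namespaces with explicit NGUIDs, not auto-visible to hosts
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DB22E07EF572476A822DE7503C7F6988 -i
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7F0F7B66DE2941FCB1F6045B7159CA1C -i
  # expose namespace 1 only to host1 and namespace 2 only to host2
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
  # host side (second SPDK app on /var/tmp/host.sock): each hostnqn sees only its own namespace
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1, uuid db22e07e-...
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2, uuid 7f0f7b66-...
  # negative test: a namespace for a nonexistent bdev is rejected with JSON-RPC error -32602, as logged above
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DB22E07EF572476A822DE7503C7F6988 || true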
02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.576 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:22.486 00:17:22.486 real 0m25.153s 00:17:22.486 user 0m36.598s 00:17:22.486 sys 0m4.750s 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:22.486 ************************************ 00:17:22.486 END TEST nvmf_ns_masking 00:17:22.486 ************************************ 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.486 ************************************ 00:17:22.486 START TEST nvmf_nvme_cli 00:17:22.486 ************************************ 00:17:22.486 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:22.745 * Looking for test storage... 
00:17:22.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.745 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.746 --rc genhtml_branch_coverage=1 00:17:22.746 --rc genhtml_function_coverage=1 00:17:22.746 --rc genhtml_legend=1 00:17:22.746 --rc geninfo_all_blocks=1 00:17:22.746 --rc geninfo_unexecuted_blocks=1 00:17:22.746 00:17:22.746 ' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.746 --rc genhtml_branch_coverage=1 00:17:22.746 --rc genhtml_function_coverage=1 00:17:22.746 --rc genhtml_legend=1 00:17:22.746 --rc geninfo_all_blocks=1 00:17:22.746 --rc geninfo_unexecuted_blocks=1 00:17:22.746 00:17:22.746 ' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.746 --rc genhtml_branch_coverage=1 00:17:22.746 --rc genhtml_function_coverage=1 00:17:22.746 --rc genhtml_legend=1 00:17:22.746 --rc geninfo_all_blocks=1 00:17:22.746 --rc geninfo_unexecuted_blocks=1 00:17:22.746 00:17:22.746 ' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.746 --rc genhtml_branch_coverage=1 00:17:22.746 --rc genhtml_function_coverage=1 00:17:22.746 --rc genhtml_legend=1 00:17:22.746 --rc geninfo_all_blocks=1 00:17:22.746 --rc geninfo_unexecuted_blocks=1 00:17:22.746 00:17:22.746 ' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
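The lt/cmp_versions helper being traced here decides whether the installed lcov is new enough to enable the branch/function coverage flags: it splits the two version strings on '.', '-' and ':' and compares the numeric fields left to right. A simplified sketch of that comparison (numeric fields only, assuming the same field-by-field logic seen in the trace):

  # "lt A B" succeeds when version A sorts before version B
  lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov is older than 2.x"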
00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.746 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:22.747 02:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:22.747 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.280 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:25.280 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:25.280 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:25.280 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:25.280 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:25.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:25.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.281 
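What is being traced here is the NIC discovery step in nvmf/common.sh: it builds per-family lists of supported PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox ConnectX IDs), selects the E810 list for this run, and then walks the matching PCI functions to find their bound driver and kernel net devices. A stripped-down, illustrative sketch of that loop, using the two E810 ports found in this run (0000:0a:00.0/1, driver ice, netdevs cvl_0_0/cvl_0_1):

  # illustrative only: map supported NIC PCI functions to their net device names
  declare -a pci_devs=(0000:0a:00.0 0000:0a:00.1)   # functions matched by 0x8086:0x159b above
  declare -a net_devs=()
  for pci in "${pci_devs[@]}"; do
      drv=$(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")
      [[ $drv == unknown || $drv == unbound ]] && continue   # skip unusable ports
      for net in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $net ]] || continue
          net_devs+=("$(basename "$net")")                   # -> cvl_0_0, cvl_0_1 in this run
      done
  done
  echo "Found net devices: ${net_devs[*]}"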
02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:25.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:25.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:25.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:17:25.281 00:17:25.281 --- 10.0.0.2 ping statistics --- 00:17:25.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.281 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:17:25.281 00:17:25.281 --- 10.0.0.1 ping statistics --- 00:17:25.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.281 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:25.281 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=227206 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 227206 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 227206 ']' 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.282 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.282 [2024-11-19 02:58:35.785558] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:17:25.282 [2024-11-19 02:58:35.785638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.282 [2024-11-19 02:58:35.858663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.540 [2024-11-19 02:58:35.909637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.540 [2024-11-19 02:58:35.909716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.540 [2024-11-19 02:58:35.909732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.540 [2024-11-19 02:58:35.909744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.540 [2024-11-19 02:58:35.909764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.540 [2024-11-19 02:58:35.911440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.540 [2024-11-19 02:58:35.911506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.540 [2024-11-19 02:58:35.911550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.540 [2024-11-19 02:58:35.911553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 [2024-11-19 02:58:36.061436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 Malloc0 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
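Before the target was started, nvmf_tcp_init above split the two E810 ports across network namespaces: cvl_0_0 moves into a new cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), and nvmf_tgt is then launched inside that namespace. A condensed restatement of the commands from the trace (paths shortened, iptables comment tag omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP into the initiator port
  ping -c 1 10.0.0.2                                                 # reachability checks in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target app then runs inside the namespace on 4 cores (-m 0xF); the test waits for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &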
00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 Malloc1 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 [2024-11-19 02:58:36.155503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:25.799 00:17:25.799 Discovery Log Number of Records 2, Generation counter 2 00:17:25.799 =====Discovery Log Entry 0====== 00:17:25.799 trtype: tcp 00:17:25.799 adrfam: ipv4 00:17:25.799 subtype: current discovery subsystem 00:17:25.799 treq: not required 00:17:25.799 portid: 0 00:17:25.799 trsvcid: 4420 00:17:25.799 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:25.799 traddr: 10.0.0.2 00:17:25.799 eflags: explicit discovery connections, duplicate discovery information 00:17:25.799 sectype: none 00:17:25.799 =====Discovery Log Entry 1====== 00:17:25.799 trtype: tcp 00:17:25.799 adrfam: ipv4 00:17:25.799 subtype: nvme subsystem 00:17:25.799 treq: not required 00:17:25.799 portid: 0 00:17:25.799 trsvcid: 4420 00:17:25.799 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:25.799 traddr: 10.0.0.2 00:17:25.799 eflags: none 00:17:25.799 sectype: none 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:25.799 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.800 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:26.800 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:26.800 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.800 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:26.800 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:26.800 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:28.795 02:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:28.795 /dev/nvme0n2 ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.795 02:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.795 rmmod nvme_tcp 00:17:28.795 rmmod nvme_fabrics 00:17:28.795 rmmod nvme_keyring 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 227206 ']' 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 227206 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 227206 ']' 00:17:28.795 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 227206 00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227206 
00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227206' 00:17:28.796 killing process with pid 227206 00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 227206 00:17:28.796 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 227206 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.058 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:30.965 00:17:30.965 real 0m8.447s 00:17:30.965 user 0m14.934s 00:17:30.965 sys 0m2.431s 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:30.965 ************************************ 00:17:30.965 END TEST nvmf_nvme_cli 00:17:30.965 ************************************ 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.965 02:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.225 ************************************ 00:17:31.225 START TEST nvmf_vfio_user 00:17:31.225 ************************************ 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
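The teardown traced above leans on two helpers from common/autotest_common.sh: waitforserial_disconnect, which polls lsblk until the subsystem serial stops showing up on any block device, and killprocess, which verifies the target pid before killing and reaping it. A minimal sketch of both, reconstructed from the xtrace output; the retry limit and sleep interval are assumptions, not values taken from the log.

  waitforserial_disconnect() {
      local serial=$1 i=0
      # Poll until no block device reports the given NVMe serial any more.
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( ++i > 15 )) && return 1    # give up after ~15 tries (assumed limit)
          sleep 1
      done
      return 0
  }

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2> /dev/null || return 0          # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")          # reactor_0 for the nvmf target
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2> /dev/null || true
  }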
00:17:31.225 * Looking for test storage... 00:17:31.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.225 --rc genhtml_branch_coverage=1 00:17:31.225 --rc genhtml_function_coverage=1 00:17:31.225 --rc genhtml_legend=1 00:17:31.225 --rc geninfo_all_blocks=1 00:17:31.225 --rc geninfo_unexecuted_blocks=1 00:17:31.225 00:17:31.225 ' 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.225 --rc genhtml_branch_coverage=1 00:17:31.225 --rc genhtml_function_coverage=1 00:17:31.225 --rc genhtml_legend=1 00:17:31.225 --rc geninfo_all_blocks=1 00:17:31.225 --rc geninfo_unexecuted_blocks=1 00:17:31.225 00:17:31.225 ' 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.225 --rc genhtml_branch_coverage=1 00:17:31.225 --rc genhtml_function_coverage=1 00:17:31.225 --rc genhtml_legend=1 00:17:31.225 --rc geninfo_all_blocks=1 00:17:31.225 --rc geninfo_unexecuted_blocks=1 00:17:31.225 00:17:31.225 ' 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.225 --rc genhtml_branch_coverage=1 00:17:31.225 --rc genhtml_function_coverage=1 00:17:31.225 --rc genhtml_legend=1 00:17:31.225 --rc geninfo_all_blocks=1 00:17:31.225 --rc geninfo_unexecuted_blocks=1 00:17:31.225 00:17:31.225 ' 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.225 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
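The lt 1.15 2 / cmp_versions check traced a few entries above decides which lcov coverage flags get exported (LCOV_OPTS / lcov_rc_opt). A minimal sketch of that dotted-version comparison, simplified from the traced logic rather than copied from scripts/common.sh.

  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v a b
      # Compare component by component, treating missing or non-numeric parts as 0.
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          [[ $a =~ ^[0-9]+$ ]] || a=0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( a > b )) && { [[ $op == ">" || $op == ">=" ]]; return; }
          (( a < b )) && { [[ $op == "<" || $op == "<=" ]]; return; }
      done
      [[ $op == "==" || $op == "<=" || $op == ">=" ]]
  }

  lt() { cmp_versions "$1" "<" "$2"; }    # lt 1.15 2 is true, so the newer lcov flags are used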
00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=228134 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 228134' 00:17:31.226 Process pid: 228134 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 228134 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 228134 ']' 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.226 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:31.226 [2024-11-19 02:58:41.816969] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:31.226 [2024-11-19 02:58:41.817065] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.485 [2024-11-19 02:58:41.885493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.485 [2024-11-19 02:58:41.932139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.485 [2024-11-19 02:58:41.932193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
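At this point the target application has just been launched and the test is waiting for its RPC socket. The pid value, core mask and socket path below mirror the log; the polling loop is only an assumption about how waitforlisten behaves, not a copy of it.

  nvmf_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  echo "Process pid: $nvmfpid"

  # Block until the target answers on its default RPC socket before issuing any RPCs.
  until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done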
00:17:31.485 [2024-11-19 02:58:41.932207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.485 [2024-11-19 02:58:41.932217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.485 [2024-11-19 02:58:41.932227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.485 [2024-11-19 02:58:41.933636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.485 [2024-11-19 02:58:41.933706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.485 [2024-11-19 02:58:41.933770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.485 [2024-11-19 02:58:41.933773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.485 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.485 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:31.485 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:32.859 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:32.859 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:32.859 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:32.859 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:32.859 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:32.859 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:33.117 Malloc1 00:17:33.117 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:33.683 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:33.683 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:33.940 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:33.941 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:33.941 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:34.200 Malloc2 00:17:34.458 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
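The subsystem setup traced above, collapsed into one place: a VFIOUSER transport, then for each of the two devices a malloc bdev, a subsystem, a namespace and a vfio-user listener rooted under /var/run/vfio-user. Every rpc.py command here appears verbatim in the log; only the loop wrapper and variable names are editorial.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$rpc_py" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user

  for i in 1 2; do
      traddr=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$traddr"
      # 64 MiB malloc bdev with 512-byte blocks backs namespace 1 of each subsystem.
      "$rpc_py" bdev_malloc_create 64 512 -b Malloc$i
      "$rpc_py" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      "$rpc_py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      "$rpc_py" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a "$traddr" -s 0
  done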
00:17:34.715 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:34.973 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:35.235 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:35.235 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:35.235 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:35.235 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:35.235 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:35.235 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:35.235 [2024-11-19 02:58:45.659648] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:35.235 [2024-11-19 02:58:45.659710] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228553 ] 00:17:35.235 [2024-11-19 02:58:45.709928] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:35.235 [2024-11-19 02:58:45.719117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:35.235 [2024-11-19 02:58:45.719146] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb5756ff000 00:17:35.235 [2024-11-19 02:58:45.720109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.721107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.722111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.723117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.724124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.725129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.726134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.727139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.235 [2024-11-19 02:58:45.728145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:35.235 [2024-11-19 02:58:45.728165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb573bf5000 00:17:35.235 [2024-11-19 02:58:45.729283] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:35.235 [2024-11-19 02:58:45.744942] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:35.235 [2024-11-19 02:58:45.745005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:35.235 [2024-11-19 02:58:45.747250] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:35.235 [2024-11-19 02:58:45.747304] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:35.235 [2024-11-19 02:58:45.747393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:35.235 [2024-11-19 02:58:45.747429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:35.235 [2024-11-19 02:58:45.747441] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:35.235 [2024-11-19 02:58:45.748250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:35.235 [2024-11-19 02:58:45.748270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:35.235 [2024-11-19 02:58:45.748282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:35.235 [2024-11-19 02:58:45.749254] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:35.235 [2024-11-19 02:58:45.749275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:35.235 [2024-11-19 02:58:45.749289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:35.235 [2024-11-19 02:58:45.750253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:35.235 [2024-11-19 02:58:45.750272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:35.235 [2024-11-19 02:58:45.751259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
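The identify pass whose BAR-mapping and controller-init debug output is interleaved above is driven by addressing the vfio-user controller with a transport ID string instead of a PCI BDF or an IP/port pair. The command line is the one from the trace; the flag comments are interpretations of how the flags are used here, not statements from the log.

  identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify

  "$identify" \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g \
      -L nvme -L nvme_vfio -L vfio_pci
  # -g appears to map to the --single-file-segments EAL argument seen in the trace;
  # the -L flags enable the nvme/nvme_vfio/vfio_pci debug logs that fill the output above.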
00:17:35.235 [2024-11-19 02:58:45.751278] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:35.235 [2024-11-19 02:58:45.751287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:35.235 [2024-11-19 02:58:45.751299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:35.235 [2024-11-19 02:58:45.751408] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:35.235 [2024-11-19 02:58:45.751416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:35.235 [2024-11-19 02:58:45.751424] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:35.235 [2024-11-19 02:58:45.752271] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:35.235 [2024-11-19 02:58:45.753275] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:35.235 [2024-11-19 02:58:45.754279] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:35.235 [2024-11-19 02:58:45.755276] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:35.235 [2024-11-19 02:58:45.755368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:35.235 [2024-11-19 02:58:45.756294] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:35.235 [2024-11-19 02:58:45.756312] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:35.235 [2024-11-19 02:58:45.756324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:35.235 [2024-11-19 02:58:45.756350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:35.235 [2024-11-19 02:58:45.756371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:35.235 [2024-11-19 02:58:45.756398] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:35.235 [2024-11-19 02:58:45.756408] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.235 [2024-11-19 02:58:45.756415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.235 [2024-11-19 02:58:45.756435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:35.235 [2024-11-19 02:58:45.756489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.756506] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:35.236 [2024-11-19 02:58:45.756515] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:35.236 [2024-11-19 02:58:45.756522] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:35.236 [2024-11-19 02:58:45.756530] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:35.236 [2024-11-19 02:58:45.756541] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:35.236 [2024-11-19 02:58:45.756550] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:35.236 [2024-11-19 02:58:45.756558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.756602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.756619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.236 [2024-11-19 02:58:45.756632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.236 [2024-11-19 02:58:45.756644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.236 [2024-11-19 02:58:45.756656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.236 [2024-11-19 02:58:45.756679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.756743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.756763] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:35.236 
[2024-11-19 02:58:45.756774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.756823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.756894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.756924] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:35.236 [2024-11-19 02:58:45.756932] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:35.236 [2024-11-19 02:58:45.756939] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.236 [2024-11-19 02:58:45.756948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.756965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.756984] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:35.236 [2024-11-19 02:58:45.757005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757050] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:35.236 [2024-11-19 02:58:45.757058] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.236 [2024-11-19 02:58:45.757064] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.236 [2024-11-19 02:58:45.757073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.757118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.757143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757169] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:35.236 [2024-11-19 02:58:45.757176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.236 [2024-11-19 02:58:45.757182] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.236 [2024-11-19 02:58:45.757194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.757206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.757221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757281] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:35.236 [2024-11-19 02:58:45.757289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:35.236 [2024-11-19 02:58:45.757297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:35.236 [2024-11-19 02:58:45.757324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.757342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.757360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.757372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.757388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.757403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:35.236 [2024-11-19 02:58:45.757418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:35.236 [2024-11-19 02:58:45.757430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:35.237 [2024-11-19 02:58:45.757452] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:35.237 [2024-11-19 02:58:45.757462] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:35.237 [2024-11-19 02:58:45.757468] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:35.237 [2024-11-19 02:58:45.757474] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:35.237 [2024-11-19 02:58:45.757479] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:35.237 [2024-11-19 02:58:45.757488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:35.237 [2024-11-19 02:58:45.757499] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:35.237 [2024-11-19 02:58:45.757510] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:35.237 [2024-11-19 02:58:45.757516] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.237 [2024-11-19 02:58:45.757525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:35.237 [2024-11-19 02:58:45.757536] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:35.237 [2024-11-19 02:58:45.757543] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.237 [2024-11-19 02:58:45.757549] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.237 [2024-11-19 02:58:45.757557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:35.237 [2024-11-19 02:58:45.757569] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:35.237 [2024-11-19 02:58:45.757577] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:35.237 [2024-11-19 02:58:45.757583] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.237 [2024-11-19 02:58:45.757592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:35.237 [2024-11-19 02:58:45.757603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:35.237 [2024-11-19 02:58:45.757624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:35.237 [2024-11-19 02:58:45.757642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:35.237 [2024-11-19 02:58:45.757654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:35.237 ===================================================== 00:17:35.237 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:35.237 ===================================================== 00:17:35.237 Controller Capabilities/Features 00:17:35.237 ================================ 00:17:35.237 Vendor ID: 4e58 00:17:35.237 Subsystem Vendor ID: 4e58 00:17:35.237 Serial Number: SPDK1 00:17:35.237 Model Number: SPDK bdev Controller 00:17:35.237 Firmware Version: 25.01 00:17:35.237 Recommended Arb Burst: 6 00:17:35.237 IEEE OUI Identifier: 8d 6b 50 00:17:35.237 Multi-path I/O 00:17:35.237 May have multiple subsystem ports: Yes 00:17:35.237 May have multiple controllers: Yes 00:17:35.237 Associated with SR-IOV VF: No 00:17:35.237 Max Data Transfer Size: 131072 00:17:35.237 Max Number of Namespaces: 32 00:17:35.237 Max Number of I/O Queues: 127 00:17:35.237 NVMe Specification Version (VS): 1.3 00:17:35.237 NVMe Specification Version (Identify): 1.3 00:17:35.237 Maximum Queue Entries: 256 00:17:35.237 Contiguous Queues Required: Yes 00:17:35.237 Arbitration Mechanisms Supported 00:17:35.237 Weighted Round Robin: Not Supported 00:17:35.237 Vendor Specific: Not Supported 00:17:35.237 Reset Timeout: 15000 ms 00:17:35.237 Doorbell Stride: 4 bytes 00:17:35.237 NVM Subsystem Reset: Not Supported 00:17:35.237 Command Sets Supported 00:17:35.237 NVM Command Set: Supported 00:17:35.237 Boot Partition: Not Supported 00:17:35.237 Memory Page Size Minimum: 4096 bytes 00:17:35.237 Memory Page Size Maximum: 4096 bytes 00:17:35.237 Persistent Memory Region: Not Supported 00:17:35.237 Optional Asynchronous Events Supported 00:17:35.237 Namespace Attribute Notices: Supported 00:17:35.237 Firmware Activation Notices: Not Supported 00:17:35.237 ANA Change Notices: Not Supported 00:17:35.237 PLE Aggregate Log Change Notices: Not Supported 00:17:35.237 LBA Status Info Alert Notices: Not Supported 00:17:35.237 EGE Aggregate Log Change Notices: Not Supported 00:17:35.237 Normal NVM Subsystem Shutdown event: Not Supported 00:17:35.237 Zone Descriptor Change Notices: Not Supported 00:17:35.237 Discovery Log Change Notices: Not Supported 00:17:35.237 Controller Attributes 00:17:35.237 128-bit Host Identifier: Supported 00:17:35.237 Non-Operational Permissive Mode: Not Supported 00:17:35.237 NVM Sets: Not Supported 00:17:35.237 Read Recovery Levels: Not Supported 00:17:35.237 Endurance Groups: Not Supported 00:17:35.237 Predictable Latency Mode: Not Supported 00:17:35.237 Traffic Based Keep ALive: Not Supported 00:17:35.237 Namespace Granularity: Not Supported 00:17:35.237 SQ Associations: Not Supported 00:17:35.237 UUID List: Not Supported 00:17:35.237 Multi-Domain Subsystem: Not Supported 00:17:35.237 Fixed Capacity Management: Not Supported 00:17:35.237 Variable Capacity Management: Not Supported 00:17:35.237 Delete Endurance Group: Not Supported 00:17:35.237 Delete NVM Set: Not Supported 00:17:35.237 Extended LBA Formats Supported: Not Supported 00:17:35.237 Flexible Data Placement Supported: Not Supported 00:17:35.237 00:17:35.237 Controller Memory Buffer Support 00:17:35.237 ================================ 00:17:35.237 
Supported: No 00:17:35.237 00:17:35.237 Persistent Memory Region Support 00:17:35.237 ================================ 00:17:35.237 Supported: No 00:17:35.237 00:17:35.237 Admin Command Set Attributes 00:17:35.237 ============================ 00:17:35.237 Security Send/Receive: Not Supported 00:17:35.237 Format NVM: Not Supported 00:17:35.237 Firmware Activate/Download: Not Supported 00:17:35.237 Namespace Management: Not Supported 00:17:35.237 Device Self-Test: Not Supported 00:17:35.237 Directives: Not Supported 00:17:35.237 NVMe-MI: Not Supported 00:17:35.237 Virtualization Management: Not Supported 00:17:35.237 Doorbell Buffer Config: Not Supported 00:17:35.237 Get LBA Status Capability: Not Supported 00:17:35.237 Command & Feature Lockdown Capability: Not Supported 00:17:35.237 Abort Command Limit: 4 00:17:35.237 Async Event Request Limit: 4 00:17:35.237 Number of Firmware Slots: N/A 00:17:35.237 Firmware Slot 1 Read-Only: N/A 00:17:35.237 Firmware Activation Without Reset: N/A 00:17:35.237 Multiple Update Detection Support: N/A 00:17:35.237 Firmware Update Granularity: No Information Provided 00:17:35.237 Per-Namespace SMART Log: No 00:17:35.237 Asymmetric Namespace Access Log Page: Not Supported 00:17:35.237 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:35.237 Command Effects Log Page: Supported 00:17:35.237 Get Log Page Extended Data: Supported 00:17:35.237 Telemetry Log Pages: Not Supported 00:17:35.237 Persistent Event Log Pages: Not Supported 00:17:35.237 Supported Log Pages Log Page: May Support 00:17:35.238 Commands Supported & Effects Log Page: Not Supported 00:17:35.238 Feature Identifiers & Effects Log Page:May Support 00:17:35.238 NVMe-MI Commands & Effects Log Page: May Support 00:17:35.238 Data Area 4 for Telemetry Log: Not Supported 00:17:35.238 Error Log Page Entries Supported: 128 00:17:35.238 Keep Alive: Supported 00:17:35.238 Keep Alive Granularity: 10000 ms 00:17:35.238 00:17:35.238 NVM Command Set Attributes 00:17:35.238 ========================== 00:17:35.238 Submission Queue Entry Size 00:17:35.238 Max: 64 00:17:35.238 Min: 64 00:17:35.238 Completion Queue Entry Size 00:17:35.238 Max: 16 00:17:35.238 Min: 16 00:17:35.238 Number of Namespaces: 32 00:17:35.238 Compare Command: Supported 00:17:35.238 Write Uncorrectable Command: Not Supported 00:17:35.238 Dataset Management Command: Supported 00:17:35.238 Write Zeroes Command: Supported 00:17:35.238 Set Features Save Field: Not Supported 00:17:35.238 Reservations: Not Supported 00:17:35.238 Timestamp: Not Supported 00:17:35.238 Copy: Supported 00:17:35.238 Volatile Write Cache: Present 00:17:35.238 Atomic Write Unit (Normal): 1 00:17:35.238 Atomic Write Unit (PFail): 1 00:17:35.238 Atomic Compare & Write Unit: 1 00:17:35.238 Fused Compare & Write: Supported 00:17:35.238 Scatter-Gather List 00:17:35.238 SGL Command Set: Supported (Dword aligned) 00:17:35.238 SGL Keyed: Not Supported 00:17:35.238 SGL Bit Bucket Descriptor: Not Supported 00:17:35.238 SGL Metadata Pointer: Not Supported 00:17:35.238 Oversized SGL: Not Supported 00:17:35.238 SGL Metadata Address: Not Supported 00:17:35.238 SGL Offset: Not Supported 00:17:35.238 Transport SGL Data Block: Not Supported 00:17:35.238 Replay Protected Memory Block: Not Supported 00:17:35.238 00:17:35.238 Firmware Slot Information 00:17:35.238 ========================= 00:17:35.238 Active slot: 1 00:17:35.238 Slot 1 Firmware Revision: 25.01 00:17:35.238 00:17:35.238 00:17:35.238 Commands Supported and Effects 00:17:35.238 ============================== 00:17:35.238 Admin 
Commands 00:17:35.238 -------------- 00:17:35.238 Get Log Page (02h): Supported 00:17:35.238 Identify (06h): Supported 00:17:35.238 Abort (08h): Supported 00:17:35.238 Set Features (09h): Supported 00:17:35.238 Get Features (0Ah): Supported 00:17:35.238 Asynchronous Event Request (0Ch): Supported 00:17:35.238 Keep Alive (18h): Supported 00:17:35.238 I/O Commands 00:17:35.238 ------------ 00:17:35.238 Flush (00h): Supported LBA-Change 00:17:35.238 Write (01h): Supported LBA-Change 00:17:35.238 Read (02h): Supported 00:17:35.238 Compare (05h): Supported 00:17:35.238 Write Zeroes (08h): Supported LBA-Change 00:17:35.238 Dataset Management (09h): Supported LBA-Change 00:17:35.238 Copy (19h): Supported LBA-Change 00:17:35.238 00:17:35.238 Error Log 00:17:35.238 ========= 00:17:35.238 00:17:35.238 Arbitration 00:17:35.238 =========== 00:17:35.238 Arbitration Burst: 1 00:17:35.238 00:17:35.238 Power Management 00:17:35.238 ================ 00:17:35.238 Number of Power States: 1 00:17:35.238 Current Power State: Power State #0 00:17:35.238 Power State #0: 00:17:35.238 Max Power: 0.00 W 00:17:35.238 Non-Operational State: Operational 00:17:35.238 Entry Latency: Not Reported 00:17:35.238 Exit Latency: Not Reported 00:17:35.238 Relative Read Throughput: 0 00:17:35.238 Relative Read Latency: 0 00:17:35.238 Relative Write Throughput: 0 00:17:35.238 Relative Write Latency: 0 00:17:35.238 Idle Power: Not Reported 00:17:35.238 Active Power: Not Reported 00:17:35.238 Non-Operational Permissive Mode: Not Supported 00:17:35.238 00:17:35.238 Health Information 00:17:35.238 ================== 00:17:35.238 Critical Warnings: 00:17:35.238 Available Spare Space: OK 00:17:35.238 Temperature: OK 00:17:35.238 Device Reliability: OK 00:17:35.238 Read Only: No 00:17:35.238 Volatile Memory Backup: OK 00:17:35.238 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:35.238 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:35.238 Available Spare: 0% 00:17:35.238 Available Sp[2024-11-19 02:58:45.757796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:35.238 [2024-11-19 02:58:45.757813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:35.238 [2024-11-19 02:58:45.757855] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:35.238 [2024-11-19 02:58:45.757872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.238 [2024-11-19 02:58:45.757884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.238 [2024-11-19 02:58:45.757894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.238 [2024-11-19 02:58:45.757904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.238 [2024-11-19 02:58:45.760702] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:35.238 [2024-11-19 02:58:45.760725] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:35.238 [2024-11-19 02:58:45.761315] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:35.238 [2024-11-19 02:58:45.761395] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:35.238 [2024-11-19 02:58:45.761409] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:35.238 [2024-11-19 02:58:45.762329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:35.238 [2024-11-19 02:58:45.762352] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:35.238 [2024-11-19 02:58:45.762410] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:35.238 [2024-11-19 02:58:45.765702] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:35.238 are Threshold: 0% 00:17:35.238 Life Percentage Used: 0% 00:17:35.238 Data Units Read: 0 00:17:35.238 Data Units Written: 0 00:17:35.238 Host Read Commands: 0 00:17:35.238 Host Write Commands: 0 00:17:35.238 Controller Busy Time: 0 minutes 00:17:35.238 Power Cycles: 0 00:17:35.238 Power On Hours: 0 hours 00:17:35.238 Unsafe Shutdowns: 0 00:17:35.238 Unrecoverable Media Errors: 0 00:17:35.238 Lifetime Error Log Entries: 0 00:17:35.238 Warning Temperature Time: 0 minutes 00:17:35.238 Critical Temperature Time: 0 minutes 00:17:35.238 00:17:35.238 Number of Queues 00:17:35.238 ================ 00:17:35.238 Number of I/O Submission Queues: 127 00:17:35.238 Number of I/O Completion Queues: 127 00:17:35.238 00:17:35.238 Active Namespaces 00:17:35.238 ================= 00:17:35.238 Namespace ID:1 00:17:35.238 Error Recovery Timeout: Unlimited 00:17:35.239 Command Set Identifier: NVM (00h) 00:17:35.239 Deallocate: Supported 00:17:35.239 Deallocated/Unwritten Error: Not Supported 00:17:35.239 Deallocated Read Value: Unknown 00:17:35.239 Deallocate in Write Zeroes: Not Supported 00:17:35.239 Deallocated Guard Field: 0xFFFF 00:17:35.239 Flush: Supported 00:17:35.239 Reservation: Supported 00:17:35.239 Namespace Sharing Capabilities: Multiple Controllers 00:17:35.239 Size (in LBAs): 131072 (0GiB) 00:17:35.239 Capacity (in LBAs): 131072 (0GiB) 00:17:35.239 Utilization (in LBAs): 131072 (0GiB) 00:17:35.239 NGUID: FA08637045EA448EAC72CB55382669DC 00:17:35.239 UUID: fa086370-45ea-448e-ac72-cb55382669dc 00:17:35.239 Thin Provisioning: Not Supported 00:17:35.239 Per-NS Atomic Units: Yes 00:17:35.239 Atomic Boundary Size (Normal): 0 00:17:35.239 Atomic Boundary Size (PFail): 0 00:17:35.239 Atomic Boundary Offset: 0 00:17:35.239 Maximum Single Source Range Length: 65535 00:17:35.239 Maximum Copy Length: 65535 00:17:35.239 Maximum Source Range Count: 1 00:17:35.239 NGUID/EUI64 Never Reused: No 00:17:35.239 Namespace Write Protected: No 00:17:35.239 Number of LBA Formats: 1 00:17:35.239 Current LBA Format: LBA Format #00 00:17:35.239 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:35.239 00:17:35.239 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:17:35.497 [2024-11-19 02:58:46.016606] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:40.763 Initializing NVMe Controllers 00:17:40.763 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:40.763 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:40.763 Initialization complete. Launching workers. 00:17:40.763 ======================================================== 00:17:40.763 Latency(us) 00:17:40.764 Device Information : IOPS MiB/s Average min max 00:17:40.764 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33720.80 131.72 3795.79 1195.19 8987.20 00:17:40.764 ======================================================== 00:17:40.764 Total : 33720.80 131.72 3795.79 1195.19 8987.20 00:17:40.764 00:17:40.764 [2024-11-19 02:58:51.039091] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:40.764 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:40.764 [2024-11-19 02:58:51.296253] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:46.033 Initializing NVMe Controllers 00:17:46.033 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:46.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:46.033 Initialization complete. Launching workers. 
00:17:46.033 ======================================================== 00:17:46.033 Latency(us) 00:17:46.033 Device Information : IOPS MiB/s Average min max 00:17:46.033 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15972.47 62.39 8019.03 6000.44 15962.33 00:17:46.033 ======================================================== 00:17:46.033 Total : 15972.47 62.39 8019.03 6000.44 15962.33 00:17:46.033 00:17:46.033 [2024-11-19 02:58:56.334452] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:46.033 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:46.033 [2024-11-19 02:58:56.558576] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:51.301 [2024-11-19 02:59:01.624992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:51.301 Initializing NVMe Controllers 00:17:51.301 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:51.301 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:51.301 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:51.301 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:51.301 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:51.301 Initialization complete. Launching workers. 00:17:51.301 Starting thread on core 2 00:17:51.301 Starting thread on core 3 00:17:51.301 Starting thread on core 1 00:17:51.301 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:51.559 [2024-11-19 02:59:01.941326] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:54.846 [2024-11-19 02:59:05.005067] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:54.846 Initializing NVMe Controllers 00:17:54.846 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:54.846 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:54.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:54.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:54.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:54.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:54.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:54.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:54.846 Initialization complete. Launching workers. 
00:17:54.846 Starting thread on core 1 with urgent priority queue 00:17:54.846 Starting thread on core 2 with urgent priority queue 00:17:54.846 Starting thread on core 3 with urgent priority queue 00:17:54.846 Starting thread on core 0 with urgent priority queue 00:17:54.846 SPDK bdev Controller (SPDK1 ) core 0: 5464.67 IO/s 18.30 secs/100000 ios 00:17:54.846 SPDK bdev Controller (SPDK1 ) core 1: 5260.67 IO/s 19.01 secs/100000 ios 00:17:54.846 SPDK bdev Controller (SPDK1 ) core 2: 5370.00 IO/s 18.62 secs/100000 ios 00:17:54.846 SPDK bdev Controller (SPDK1 ) core 3: 5054.00 IO/s 19.79 secs/100000 ios 00:17:54.846 ======================================================== 00:17:54.846 00:17:54.846 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:54.846 [2024-11-19 02:59:05.328205] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:54.846 Initializing NVMe Controllers 00:17:54.846 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:54.846 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:54.846 Namespace ID: 1 size: 0GB 00:17:54.846 Initialization complete. 00:17:54.846 INFO: using host memory buffer for IO 00:17:54.846 Hello world! 00:17:54.846 [2024-11-19 02:59:05.361731] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:54.846 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:55.104 [2024-11-19 02:59:05.662580] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:56.479 Initializing NVMe Controllers 00:17:56.479 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:56.479 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:56.479 Initialization complete. Launching workers. 
00:17:56.479 submit (in ns) avg, min, max = 8686.1, 3583.3, 4016073.3 00:17:56.479 complete (in ns) avg, min, max = 26486.7, 2116.7, 6993392.2 00:17:56.479 00:17:56.479 Submit histogram 00:17:56.479 ================ 00:17:56.479 Range in us Cumulative Count 00:17:56.479 3.579 - 3.603: 0.2096% ( 27) 00:17:56.479 3.603 - 3.627: 0.7685% ( 72) 00:17:56.479 3.627 - 3.650: 2.4841% ( 221) 00:17:56.479 3.650 - 3.674: 5.8842% ( 438) 00:17:56.479 3.674 - 3.698: 12.6921% ( 877) 00:17:56.479 3.698 - 3.721: 20.6257% ( 1022) 00:17:56.479 3.721 - 3.745: 29.5451% ( 1149) 00:17:56.479 3.745 - 3.769: 37.4088% ( 1013) 00:17:56.479 3.769 - 3.793: 45.2492% ( 1010) 00:17:56.479 3.793 - 3.816: 52.3211% ( 911) 00:17:56.479 3.816 - 3.840: 57.6774% ( 690) 00:17:56.479 3.840 - 3.864: 62.1487% ( 576) 00:17:56.479 3.864 - 3.887: 65.7584% ( 465) 00:17:56.479 3.887 - 3.911: 69.3215% ( 459) 00:17:56.479 3.911 - 3.935: 72.9856% ( 472) 00:17:56.479 3.935 - 3.959: 76.9601% ( 512) 00:17:56.479 3.959 - 3.982: 80.7095% ( 483) 00:17:56.479 3.982 - 4.006: 83.7680% ( 394) 00:17:56.479 4.006 - 4.030: 86.1512% ( 307) 00:17:56.479 4.030 - 4.053: 88.1307% ( 255) 00:17:56.479 4.053 - 4.077: 89.6212% ( 192) 00:17:56.479 4.077 - 4.101: 90.8166% ( 154) 00:17:56.479 4.101 - 4.124: 91.9888% ( 151) 00:17:56.479 4.124 - 4.148: 92.9204% ( 120) 00:17:56.479 4.148 - 4.172: 93.6656% ( 96) 00:17:56.479 4.172 - 4.196: 94.1779% ( 66) 00:17:56.479 4.196 - 4.219: 94.6903% ( 66) 00:17:56.479 4.219 - 4.243: 95.0318% ( 44) 00:17:56.479 4.243 - 4.267: 95.3268% ( 38) 00:17:56.479 4.267 - 4.290: 95.5675% ( 31) 00:17:56.479 4.290 - 4.314: 95.7305% ( 21) 00:17:56.479 4.314 - 4.338: 95.9168% ( 24) 00:17:56.479 4.338 - 4.361: 96.0876% ( 22) 00:17:56.479 4.361 - 4.385: 96.1419% ( 7) 00:17:56.479 4.385 - 4.409: 96.2506% ( 14) 00:17:56.479 4.409 - 4.433: 96.3593% ( 14) 00:17:56.479 4.433 - 4.456: 96.4990% ( 18) 00:17:56.479 4.456 - 4.480: 96.5999% ( 13) 00:17:56.479 4.480 - 4.504: 96.6542% ( 7) 00:17:56.479 4.504 - 4.527: 96.7008% ( 6) 00:17:56.479 4.527 - 4.551: 96.7785% ( 10) 00:17:56.479 4.551 - 4.575: 96.8250% ( 6) 00:17:56.479 4.575 - 4.599: 96.8716% ( 6) 00:17:56.479 4.599 - 4.622: 96.8794% ( 1) 00:17:56.479 4.622 - 4.646: 96.9182% ( 5) 00:17:56.479 4.646 - 4.670: 96.9415% ( 3) 00:17:56.479 4.670 - 4.693: 96.9570% ( 2) 00:17:56.479 4.693 - 4.717: 96.9725% ( 2) 00:17:56.479 4.717 - 4.741: 97.0036% ( 4) 00:17:56.479 4.741 - 4.764: 97.0269% ( 3) 00:17:56.479 4.764 - 4.788: 97.0501% ( 3) 00:17:56.479 4.788 - 4.812: 97.0657% ( 2) 00:17:56.479 4.812 - 4.836: 97.0967% ( 4) 00:17:56.479 4.836 - 4.859: 97.1045% ( 1) 00:17:56.479 4.859 - 4.883: 97.1355% ( 4) 00:17:56.479 4.883 - 4.907: 97.1821% ( 6) 00:17:56.479 4.907 - 4.930: 97.2365% ( 7) 00:17:56.479 4.930 - 4.954: 97.3218% ( 11) 00:17:56.479 4.954 - 4.978: 97.3839% ( 8) 00:17:56.479 4.978 - 5.001: 97.4460% ( 8) 00:17:56.479 5.001 - 5.025: 97.5314% ( 11) 00:17:56.479 5.025 - 5.049: 97.6091% ( 10) 00:17:56.479 5.049 - 5.073: 97.6401% ( 4) 00:17:56.479 5.073 - 5.096: 97.7022% ( 8) 00:17:56.479 5.096 - 5.120: 97.7643% ( 8) 00:17:56.479 5.120 - 5.144: 97.8109% ( 6) 00:17:56.479 5.144 - 5.167: 97.8420% ( 4) 00:17:56.479 5.167 - 5.191: 97.8730% ( 4) 00:17:56.479 5.191 - 5.215: 97.9041% ( 4) 00:17:56.479 5.215 - 5.239: 97.9273% ( 3) 00:17:56.479 5.239 - 5.262: 97.9662% ( 5) 00:17:56.479 5.262 - 5.286: 97.9739% ( 1) 00:17:56.479 5.286 - 5.310: 98.0127% ( 5) 00:17:56.479 5.310 - 5.333: 98.0515% ( 5) 00:17:56.479 5.333 - 5.357: 98.0826% ( 4) 00:17:56.479 5.357 - 5.381: 98.0981% ( 2) 00:17:56.479 5.452 - 5.476: 98.1136% ( 2) 
00:17:56.479 5.476 - 5.499: 98.1369% ( 3) 00:17:56.479 5.499 - 5.523: 98.1680% ( 4) 00:17:56.479 5.547 - 5.570: 98.1835% ( 2) 00:17:56.479 5.594 - 5.618: 98.1913% ( 1) 00:17:56.479 5.641 - 5.665: 98.2068% ( 2) 00:17:56.479 5.665 - 5.689: 98.2146% ( 1) 00:17:56.479 5.689 - 5.713: 98.2223% ( 1) 00:17:56.479 5.713 - 5.736: 98.2301% ( 1) 00:17:56.479 5.760 - 5.784: 98.2379% ( 1) 00:17:56.479 5.784 - 5.807: 98.2456% ( 1) 00:17:56.479 5.879 - 5.902: 98.2611% ( 2) 00:17:56.479 5.950 - 5.973: 98.2689% ( 1) 00:17:56.479 5.997 - 6.021: 98.2767% ( 1) 00:17:56.479 6.044 - 6.068: 98.2844% ( 1) 00:17:56.479 6.068 - 6.116: 98.2922% ( 1) 00:17:56.479 6.116 - 6.163: 98.3000% ( 1) 00:17:56.480 6.210 - 6.258: 98.3077% ( 1) 00:17:56.480 6.305 - 6.353: 98.3155% ( 1) 00:17:56.480 6.400 - 6.447: 98.3232% ( 1) 00:17:56.480 6.447 - 6.495: 98.3310% ( 1) 00:17:56.480 6.495 - 6.542: 98.3465% ( 2) 00:17:56.480 6.542 - 6.590: 98.3621% ( 2) 00:17:56.480 6.637 - 6.684: 98.3698% ( 1) 00:17:56.480 6.684 - 6.732: 98.3776% ( 1) 00:17:56.480 6.874 - 6.921: 98.3853% ( 1) 00:17:56.480 6.969 - 7.016: 98.3931% ( 1) 00:17:56.480 7.016 - 7.064: 98.4086% ( 2) 00:17:56.480 7.064 - 7.111: 98.4164% ( 1) 00:17:56.480 7.111 - 7.159: 98.4242% ( 1) 00:17:56.480 7.159 - 7.206: 98.4397% ( 2) 00:17:56.480 7.443 - 7.490: 98.4474% ( 1) 00:17:56.480 7.490 - 7.538: 98.4552% ( 1) 00:17:56.480 7.538 - 7.585: 98.4707% ( 2) 00:17:56.480 7.585 - 7.633: 98.4785% ( 1) 00:17:56.480 7.870 - 7.917: 98.5018% ( 3) 00:17:56.480 7.917 - 7.964: 98.5173% ( 2) 00:17:56.480 8.012 - 8.059: 98.5251% ( 1) 00:17:56.480 8.107 - 8.154: 98.5328% ( 1) 00:17:56.480 8.154 - 8.201: 98.5484% ( 2) 00:17:56.480 8.249 - 8.296: 98.5561% ( 1) 00:17:56.480 8.296 - 8.344: 98.5639% ( 1) 00:17:56.480 8.344 - 8.391: 98.5717% ( 1) 00:17:56.480 8.439 - 8.486: 98.5872% ( 2) 00:17:56.480 8.676 - 8.723: 98.5949% ( 1) 00:17:56.480 8.770 - 8.818: 98.6105% ( 2) 00:17:56.480 8.818 - 8.865: 98.6182% ( 1) 00:17:56.480 8.865 - 8.913: 98.6338% ( 2) 00:17:56.480 8.913 - 8.960: 98.6415% ( 1) 00:17:56.480 9.007 - 9.055: 98.6493% ( 1) 00:17:56.480 9.055 - 9.102: 98.6570% ( 1) 00:17:56.480 9.102 - 9.150: 98.6648% ( 1) 00:17:56.480 9.197 - 9.244: 98.6726% ( 1) 00:17:56.480 9.292 - 9.339: 98.6881% ( 2) 00:17:56.480 9.434 - 9.481: 98.6959% ( 1) 00:17:56.480 9.576 - 9.624: 98.7036% ( 1) 00:17:56.480 9.624 - 9.671: 98.7191% ( 2) 00:17:56.480 9.813 - 9.861: 98.7347% ( 2) 00:17:56.480 9.861 - 9.908: 98.7424% ( 1) 00:17:56.480 10.003 - 10.050: 98.7502% ( 1) 00:17:56.480 10.050 - 10.098: 98.7580% ( 1) 00:17:56.480 10.193 - 10.240: 98.7657% ( 1) 00:17:56.480 10.240 - 10.287: 98.7735% ( 1) 00:17:56.480 10.287 - 10.335: 98.7890% ( 2) 00:17:56.480 10.335 - 10.382: 98.7968% ( 1) 00:17:56.480 10.382 - 10.430: 98.8045% ( 1) 00:17:56.480 10.524 - 10.572: 98.8201% ( 2) 00:17:56.480 10.761 - 10.809: 98.8278% ( 1) 00:17:56.480 10.809 - 10.856: 98.8356% ( 1) 00:17:56.480 10.904 - 10.951: 98.8433% ( 1) 00:17:56.480 10.951 - 10.999: 98.8511% ( 1) 00:17:56.480 10.999 - 11.046: 98.8589% ( 1) 00:17:56.480 11.046 - 11.093: 98.8744% ( 2) 00:17:56.480 11.188 - 11.236: 98.8822% ( 1) 00:17:56.480 11.567 - 11.615: 98.8899% ( 1) 00:17:56.480 11.615 - 11.662: 98.8977% ( 1) 00:17:56.480 11.710 - 11.757: 98.9054% ( 1) 00:17:56.480 11.899 - 11.947: 98.9210% ( 2) 00:17:56.480 12.231 - 12.326: 98.9287% ( 1) 00:17:56.480 12.516 - 12.610: 98.9365% ( 1) 00:17:56.480 12.705 - 12.800: 98.9443% ( 1) 00:17:56.480 13.084 - 13.179: 98.9520% ( 1) 00:17:56.480 13.274 - 13.369: 98.9598% ( 1) 00:17:56.480 13.653 - 13.748: 98.9753% ( 2) 00:17:56.480 
13.748 - 13.843: 98.9831% ( 1) 00:17:56.480 13.938 - 14.033: 98.9908% ( 1) 00:17:56.480 14.317 - 14.412: 99.0064% ( 2) 00:17:56.480 14.507 - 14.601: 99.0141% ( 1) 00:17:56.480 14.601 - 14.696: 99.0219% ( 1) 00:17:56.480 14.696 - 14.791: 99.0297% ( 1) 00:17:56.480 14.886 - 14.981: 99.0374% ( 1) 00:17:56.480 15.739 - 15.834: 99.0452% ( 1) 00:17:56.480 16.119 - 16.213: 99.0529% ( 1) 00:17:56.480 17.067 - 17.161: 99.0607% ( 1) 00:17:56.480 17.351 - 17.446: 99.0685% ( 1) 00:17:56.480 17.446 - 17.541: 99.0918% ( 3) 00:17:56.480 17.541 - 17.636: 99.1150% ( 3) 00:17:56.480 17.636 - 17.730: 99.1383% ( 3) 00:17:56.480 17.730 - 17.825: 99.1849% ( 6) 00:17:56.480 17.825 - 17.920: 99.2082% ( 3) 00:17:56.480 17.920 - 18.015: 99.2703% ( 8) 00:17:56.480 18.015 - 18.110: 99.3246% ( 7) 00:17:56.480 18.110 - 18.204: 99.3790% ( 7) 00:17:56.480 18.204 - 18.299: 99.4566% ( 10) 00:17:56.480 18.299 - 18.394: 99.5109% ( 7) 00:17:56.480 18.394 - 18.489: 99.5730% ( 8) 00:17:56.480 18.489 - 18.584: 99.6196% ( 6) 00:17:56.480 18.584 - 18.679: 99.6507% ( 4) 00:17:56.480 18.679 - 18.773: 99.6895% ( 5) 00:17:56.480 18.773 - 18.868: 99.7128% ( 3) 00:17:56.480 18.868 - 18.963: 99.7283% ( 2) 00:17:56.480 18.963 - 19.058: 99.7516% ( 3) 00:17:56.480 19.058 - 19.153: 99.7594% ( 1) 00:17:56.480 19.153 - 19.247: 99.7671% ( 1) 00:17:56.480 19.247 - 19.342: 99.7826% ( 2) 00:17:56.480 19.342 - 19.437: 99.7904% ( 1) 00:17:56.480 20.006 - 20.101: 99.7982% ( 1) 00:17:56.480 20.101 - 20.196: 99.8137% ( 2) 00:17:56.480 20.575 - 20.670: 99.8215% ( 1) 00:17:56.480 21.997 - 22.092: 99.8292% ( 1) 00:17:56.480 22.661 - 22.756: 99.8447% ( 2) 00:17:56.480 23.704 - 23.799: 99.8525% ( 1) 00:17:56.480 25.031 - 25.221: 99.8603% ( 1) 00:17:56.480 25.410 - 25.600: 99.8758% ( 2) 00:17:56.480 26.359 - 26.548: 99.8836% ( 1) 00:17:56.480 3980.705 - 4004.978: 99.9767% ( 12) 00:17:56.480 4004.978 - 4029.250: 100.0000% ( 3) 00:17:56.480 00:17:56.480 Complete histogram 00:17:56.480 ================== 00:17:56.480 Range in us Cumulative Count 00:17:56.480 2.110 - 2.121: 0.5279% ( 68) 00:17:56.480 2.121 - 2.133: 27.1464% ( 3429) 00:17:56.480 2.133 - 2.145: 48.9287% ( 2806) 00:17:56.480 2.145 - 2.157: 51.0946% ( 279) 00:17:56.480 2.157 - 2.169: 57.8016% ( 864) 00:17:56.480 2.169 - 2.181: 60.7670% ( 382) 00:17:56.480 2.181 - 2.193: 62.9017% ( 275) 00:17:56.480 2.193 - 2.204: 75.8733% ( 1671) 00:17:56.480 2.204 - 2.216: 82.1146% ( 804) 00:17:56.480 2.216 - 2.228: 83.3333% ( 157) 00:17:56.480 2.228 - 2.240: 86.2832% ( 380) 00:17:56.480 2.240 - 2.252: 87.8746% ( 205) 00:17:56.480 2.252 - 2.264: 88.5422% ( 86) 00:17:56.480 2.264 - 2.276: 90.9098% ( 305) 00:17:56.480 2.276 - 2.287: 93.2619% ( 303) 00:17:56.480 2.287 - 2.299: 93.5957% ( 43) 00:17:56.480 2.299 - 2.311: 93.9761% ( 49) 00:17:56.480 2.311 - 2.323: 94.3642% ( 50) 00:17:56.480 2.323 - 2.335: 94.5040% ( 18) 00:17:56.480 2.335 - 2.347: 94.7601% ( 33) 00:17:56.480 2.347 - 2.359: 95.1638% ( 52) 00:17:56.480 2.359 - 2.370: 95.3035% ( 18) 00:17:56.480 2.370 - 2.382: 95.3501% ( 6) 00:17:56.480 2.382 - 2.394: 95.4044% ( 7) 00:17:56.480 2.394 - 2.406: 95.4898% ( 11) 00:17:56.480 2.406 - 2.418: 95.5907% ( 13) 00:17:56.480 2.418 - 2.430: 95.8236% ( 30) 00:17:56.480 2.430 - 2.441: 96.1885% ( 47) 00:17:56.480 2.441 - 2.453: 96.5145% ( 42) 00:17:56.480 2.453 - 2.465: 96.7241% ( 27) 00:17:56.480 2.465 - 2.477: 96.9803% ( 33) 00:17:56.480 2.477 - 2.489: 97.1666% ( 24) 00:17:56.480 2.489 - 2.501: 97.3296% ( 21) 00:17:56.480 2.501 - 2.513: 97.4460% ( 15) 00:17:56.480 2.513 - 2.524: 97.5625% ( 15) 00:17:56.480 2.524 - 2.536: 
97.6324% ( 9) 00:17:56.480 2.536 - 2.548: 97.6789% ( 6) 00:17:56.480 2.548 - 2.560: 97.7100% ( 4) 00:17:56.480 2.560 - 2.572: 97.7410% ( 4) 00:17:56.480 2.572 - 2.584: 97.7488% ( 1) 00:17:56.480 2.584 - 2.596: 97.7721% ( 3) 00:17:56.480 2.596 - 2.607: 97.8109% ( 5) 00:17:56.480 2.619 - 2.631: 97.8264% ( 2) 00:17:56.480 2.631 - 2.643: 97.8342% ( 1) 00:17:56.480 2.643 - 2.655: 97.8420% ( 1) 00:17:56.480 2.655 - 2.667: 97.8497% ( 1) 00:17:56.480 2.679 - 2.690: 97.8652% ( 2) 00:17:56.480 2.690 - 2.702: 97.8808% ( 2) 00:17:56.480 2.702 - 2.714: 97.9041% ( 3) 00:17:56.480 2.714 - 2.726: 97.9273% ( 3) 00:17:56.480 2.726 - 2.738: 97.9351% ( 1) 00:17:56.480 2.738 - 2.750: 97.9584% ( 3) 00:17:56.480 2.750 - 2.761: 97.9739% ( 2) 00:17:56.480 2.773 - 2.785: 98.0127% ( 5) 00:17:56.480 2.785 - 2.797: 98.0205% ( 1) 00:17:56.480 2.809 - 2.821: 98.0283% ( 1) 00:17:56.480 2.821 - 2.833: 98.0360% ( 1) 00:17:56.480 2.833 - 2.844: 98.0593% ( 3) 00:17:56.480 2.856 - 2.868: 98.0748% ( 2) 00:17:56.480 2.880 - 2.892: 98.0904% ( 2) 00:17:56.480 2.892 - 2.904: 98.1136% ( 3) 00:17:56.480 2.904 - 2.916: 98.1214% ( 1) 00:17:56.480 2.916 - 2.927: 98.1447% ( 3) 00:17:56.480 2.927 - 2.939: 98.1525% ( 1) 00:17:56.480 2.963 - 2.975: 98.1680% ( 2) 00:17:56.480 2.975 - 2.987: 98.1835% ( 2) 00:17:56.480 2.987 - 2.999: 98.1990% ( 2) 00:17:56.480 3.010 - 3.022: 98.2068% ( 1) 00:17:56.480 3.022 - 3.034: 98.2223% ( 2) 00:17:56.480 3.034 - 3.058: 98.2456% ( 3) 00:17:56.481 3.058 - 3.081: 98.2534% ( 1) 00:17:56.481 3.081 - 3.105: 98.2844% ( 4) 00:17:56.481 3.105 - 3.129: 98.3232% ( 5) 00:17:56.481 3.129 - 3.153: 98.3310% ( 1) 00:17:56.481 3.224 - 3.247: 98.3543% ( 3) 00:17:56.481 3.247 - 3.271: 98.3698% ( 2) 00:17:56.481 3.271 - 3.295: 98.3931% ( 3) 00:17:56.481 3.295 - 3.319: 98.4086% ( 2) 00:17:56.481 3.319 - 3.342: 98.4164% ( 1) 00:17:56.481 3.366 - 3.390: 98.4319% ( 2) 00:17:56.481 3.390 - 3.413: 98.4397% ( 1) 00:17:56.481 3.437 - 3.461: 98.4630% ( 3) 00:17:56.481 3.484 - 3.508: 98.4785% ( 2) 00:17:56.481 3.508 - 3.532: 98.5018% ( 3) 00:17:56.481 3.556 - 3.579: 98.5095% ( 1) 00:17:56.481 3.579 - 3.603: 98.5173% ( 1) 00:17:56.481 3.603 - 3.627: 98.5251% ( 1) 00:17:56.481 3.650 - 3.674: 98.5328% ( 1) 00:17:56.481 3.674 - 3.698: 98.5406% ( 1) 00:17:56.481 3.698 - 3.721: 98.5561% ( 2) 00:17:56.481 3.745 - 3.769: 98.5794% ( 3) 00:17:56.481 3.769 - 3.793: 98.5872% ( 1) 00:17:56.481 3.793 - 3.816: 98.6027% ( 2) 00:17:56.481 3.816 - 3.840: 98.6182% ( 2) 00:17:56.481 3.840 - 3.864: 98.6260% ( 1) 00:17:56.481 3.864 - 3.887: 98.6338% ( 1) 00:17:56.481 3.887 - 3.911: 98.6415% ( 1) 00:17:56.481 3.911 - 3.935: 98.6493% ( 1) 00:17:56.481 3.959 - 3.982: 98.6570% ( 1) 00:17:56.481 4.006 - 4.030: 98.6803% ( 3) 00:17:56.481 4.124 - 4.148: 98.6881% ( 1) 00:17:56.481 5.547 - 5.570: 98.6959% ( 1) 00:17:56.481 5.784 - 5.807: 98.7036% ( 1) 00:17:56.481 5.855 - 5.879: 98.7114% ( 1) 00:17:56.481 6.210 - 6.258: 98.7269% ( 2) 00:17:56.481 6.400 - 6.447: 98.7347% ( 1) 00:17:56.481 6.542 - 6.590: 98.7424% ( 1) 00:17:56.481 6.637 - 6.684: 98.7657% ( 3) 00:17:56.481 6.684 - 6.732: 98.7812% ( 2) 00:17:56.481 6.732 - 6.779: 98.7890% ( 1) 00:17:56.481 7.064 - 7.111: 98.7968% ( 1) 00:17:56.481 7.538 - 7.585: 98.8045% ( 1) 00:17:56.481 7.585 - 7.633: 98.8123% ( 1) 00:17:56.481 7.822 - 7.870: 98.8201% ( 1) 00:17:56.481 8.154 - 8.201: 98.8278% ( 1) 00:17:56.481 8.249 - 8.296: 98.8356% ( 1) 00:17:56.481 8.391 - 8.439: 98.8433% ( 1) 00:17:56.481 8.486 - 8.533: 98.8511% ( 1) 00:17:56.481 8.628 - 8.676: 98.8589% ( 1) 00:17:56.481 8.818 - 8.865: 98.8666% ( 1) 00:17:56.481 
9.197 - 9.244: 98.8744% ( 1) 00:17:56.481 9.624 - 9.671: 98.8822% ( 1) 00:17:56.481 13.748 - 13.843: 98.8899% ( 1) 00:17:56.481 14.886 - 14.981: 98.8977% ( 1) 00:17:56.481 15.550 - 15.644: 98.9132% ( 2) 00:17:56.481 15.739 - 15.834: 98.9210% ( 1) 00:17:56.481 15.834 - 15.929: 98.9287% ( 1) 00:17:56.481 15.929 - 16.024: 98.9520% ( 3) 00:17:56.481 16.024 - 16.119: 98.9753% ( 3) 00:17:56.481 16.119 - 16.213: 99.0064% ( 4) 00:17:56.481 16.213 - 16.308: 99.0452% ( 5) 00:17:56.481 16.308 - 16.403: 99.0685% ( 3) 00:17:56.481 16.403 - 16.498: 99.0840% ( 2) 00:17:56.481 16.498 - 16.593: 99.1306% ( 6) 00:17:56.481 16.593 - 16.687: 99.1461% ( 2) 00:17:56.481 16.687 - 16.782: 99.1694%[2024-11-19 02:59:06.685653] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:56.481 ( 3) 00:17:56.481 16.782 - 16.877: 99.2082% ( 5) 00:17:56.481 16.877 - 16.972: 99.2470% ( 5) 00:17:56.481 16.972 - 17.067: 99.2858% ( 5) 00:17:56.481 17.067 - 17.161: 99.3014% ( 2) 00:17:56.481 17.161 - 17.256: 99.3246% ( 3) 00:17:56.481 17.256 - 17.351: 99.3402% ( 2) 00:17:56.481 17.351 - 17.446: 99.3635% ( 3) 00:17:56.481 17.636 - 17.730: 99.3712% ( 1) 00:17:56.481 17.730 - 17.825: 99.3790% ( 1) 00:17:56.481 18.015 - 18.110: 99.3867% ( 1) 00:17:56.481 27.686 - 27.876: 99.3945% ( 1) 00:17:56.481 1037.653 - 1043.721: 99.4023% ( 1) 00:17:56.481 3980.705 - 4004.978: 99.9379% ( 69) 00:17:56.481 4004.978 - 4029.250: 99.9922% ( 7) 00:17:56.481 6990.507 - 7039.052: 100.0000% ( 1) 00:17:56.481 00:17:56.481 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:56.481 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:56.481 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:56.481 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:56.481 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:56.481 [ 00:17:56.481 { 00:17:56.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:56.481 "subtype": "Discovery", 00:17:56.481 "listen_addresses": [], 00:17:56.481 "allow_any_host": true, 00:17:56.481 "hosts": [] 00:17:56.481 }, 00:17:56.481 { 00:17:56.481 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:56.481 "subtype": "NVMe", 00:17:56.481 "listen_addresses": [ 00:17:56.481 { 00:17:56.481 "trtype": "VFIOUSER", 00:17:56.481 "adrfam": "IPv4", 00:17:56.481 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:56.481 "trsvcid": "0" 00:17:56.481 } 00:17:56.481 ], 00:17:56.481 "allow_any_host": true, 00:17:56.481 "hosts": [], 00:17:56.481 "serial_number": "SPDK1", 00:17:56.481 "model_number": "SPDK bdev Controller", 00:17:56.481 "max_namespaces": 32, 00:17:56.481 "min_cntlid": 1, 00:17:56.481 "max_cntlid": 65519, 00:17:56.481 "namespaces": [ 00:17:56.481 { 00:17:56.481 "nsid": 1, 00:17:56.481 "bdev_name": "Malloc1", 00:17:56.481 "name": "Malloc1", 00:17:56.481 "nguid": "FA08637045EA448EAC72CB55382669DC", 00:17:56.481 "uuid": "fa086370-45ea-448e-ac72-cb55382669dc" 00:17:56.481 } 00:17:56.481 ] 00:17:56.481 }, 00:17:56.481 { 00:17:56.481 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:56.481 "subtype": "NVMe", 00:17:56.481 
"listen_addresses": [ 00:17:56.481 { 00:17:56.481 "trtype": "VFIOUSER", 00:17:56.481 "adrfam": "IPv4", 00:17:56.481 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:56.481 "trsvcid": "0" 00:17:56.481 } 00:17:56.481 ], 00:17:56.481 "allow_any_host": true, 00:17:56.481 "hosts": [], 00:17:56.481 "serial_number": "SPDK2", 00:17:56.481 "model_number": "SPDK bdev Controller", 00:17:56.481 "max_namespaces": 32, 00:17:56.481 "min_cntlid": 1, 00:17:56.481 "max_cntlid": 65519, 00:17:56.481 "namespaces": [ 00:17:56.481 { 00:17:56.481 "nsid": 1, 00:17:56.481 "bdev_name": "Malloc2", 00:17:56.481 "name": "Malloc2", 00:17:56.481 "nguid": "45792A2F1BDC465FBF7F85F1D8347315", 00:17:56.481 "uuid": "45792a2f-1bdc-465f-bf7f-85f1d8347315" 00:17:56.481 } 00:17:56.481 ] 00:17:56.481 } 00:17:56.481 ] 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=231080 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:17:56.481 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:56.739 [2024-11-19 02:59:07.182171] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:56.739 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:56.998 Malloc3 00:17:56.998 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:57.256 [2024-11-19 02:59:07.786675] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:57.256 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:57.256 Asynchronous Event Request test 00:17:57.256 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:57.256 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:57.256 Registering asynchronous event callbacks... 00:17:57.256 Starting namespace attribute notice tests for all controllers... 00:17:57.256 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:57.256 aer_cb - Changed Namespace 00:17:57.256 Cleaning up... 00:17:57.514 [ 00:17:57.514 { 00:17:57.514 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:57.514 "subtype": "Discovery", 00:17:57.514 "listen_addresses": [], 00:17:57.514 "allow_any_host": true, 00:17:57.514 "hosts": [] 00:17:57.514 }, 00:17:57.514 { 00:17:57.514 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:57.514 "subtype": "NVMe", 00:17:57.514 "listen_addresses": [ 00:17:57.514 { 00:17:57.514 "trtype": "VFIOUSER", 00:17:57.514 "adrfam": "IPv4", 00:17:57.514 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:57.514 "trsvcid": "0" 00:17:57.514 } 00:17:57.514 ], 00:17:57.514 "allow_any_host": true, 00:17:57.514 "hosts": [], 00:17:57.514 "serial_number": "SPDK1", 00:17:57.514 "model_number": "SPDK bdev Controller", 00:17:57.514 "max_namespaces": 32, 00:17:57.514 "min_cntlid": 1, 00:17:57.514 "max_cntlid": 65519, 00:17:57.514 "namespaces": [ 00:17:57.514 { 00:17:57.514 "nsid": 1, 00:17:57.514 "bdev_name": "Malloc1", 00:17:57.514 "name": "Malloc1", 00:17:57.514 "nguid": "FA08637045EA448EAC72CB55382669DC", 00:17:57.514 "uuid": "fa086370-45ea-448e-ac72-cb55382669dc" 00:17:57.514 }, 00:17:57.514 { 00:17:57.514 "nsid": 2, 00:17:57.514 "bdev_name": "Malloc3", 00:17:57.514 "name": "Malloc3", 00:17:57.514 "nguid": "26F1C2FF09F14FAD8C300A249B17C561", 00:17:57.514 "uuid": "26f1c2ff-09f1-4fad-8c30-0a249b17c561" 00:17:57.514 } 00:17:57.514 ] 00:17:57.514 }, 00:17:57.514 { 00:17:57.514 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:57.514 "subtype": "NVMe", 00:17:57.514 "listen_addresses": [ 00:17:57.514 { 00:17:57.514 "trtype": "VFIOUSER", 00:17:57.514 "adrfam": "IPv4", 00:17:57.514 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:57.514 "trsvcid": "0" 00:17:57.514 } 00:17:57.514 ], 00:17:57.514 "allow_any_host": true, 00:17:57.514 "hosts": [], 00:17:57.514 "serial_number": "SPDK2", 00:17:57.514 "model_number": "SPDK bdev Controller", 00:17:57.514 "max_namespaces": 32, 00:17:57.514 "min_cntlid": 1, 00:17:57.514 "max_cntlid": 65519, 00:17:57.514 "namespaces": [ 00:17:57.514 
{ 00:17:57.514 "nsid": 1, 00:17:57.514 "bdev_name": "Malloc2", 00:17:57.514 "name": "Malloc2", 00:17:57.514 "nguid": "45792A2F1BDC465FBF7F85F1D8347315", 00:17:57.514 "uuid": "45792a2f-1bdc-465f-bf7f-85f1d8347315" 00:17:57.514 } 00:17:57.514 ] 00:17:57.514 } 00:17:57.514 ] 00:17:57.514 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 231080 00:17:57.514 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:57.514 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:57.514 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:57.514 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:57.514 [2024-11-19 02:59:08.083257] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:57.514 [2024-11-19 02:59:08.083300] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231221 ] 00:17:57.774 [2024-11-19 02:59:08.133603] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:57.774 [2024-11-19 02:59:08.138963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:57.774 [2024-11-19 02:59:08.139009] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb554d08000 00:17:57.774 [2024-11-19 02:59:08.139964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.774 [2024-11-19 02:59:08.140971] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.774 [2024-11-19 02:59:08.141993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.774 [2024-11-19 02:59:08.142996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.774 [2024-11-19 02:59:08.144000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.774 [2024-11-19 02:59:08.145012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.775 [2024-11-19 02:59:08.146002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.775 [2024-11-19 02:59:08.147021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.775 [2024-11-19 02:59:08.148035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap 
offset 32 00:17:57.775 [2024-11-19 02:59:08.148056] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb5531f5000 00:17:57.775 [2024-11-19 02:59:08.149187] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:57.775 [2024-11-19 02:59:08.163903] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:57.775 [2024-11-19 02:59:08.163943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:57.775 [2024-11-19 02:59:08.166051] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:57.775 [2024-11-19 02:59:08.166111] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:57.775 [2024-11-19 02:59:08.166201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:57.775 [2024-11-19 02:59:08.166224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:57.775 [2024-11-19 02:59:08.166234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:57.775 [2024-11-19 02:59:08.167063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:57.775 [2024-11-19 02:59:08.167085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:57.775 [2024-11-19 02:59:08.167098] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:57.775 [2024-11-19 02:59:08.168062] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:57.775 [2024-11-19 02:59:08.168084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:57.775 [2024-11-19 02:59:08.168097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:57.775 [2024-11-19 02:59:08.169074] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:57.775 [2024-11-19 02:59:08.169094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:57.775 [2024-11-19 02:59:08.170081] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:57.775 [2024-11-19 02:59:08.170102] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:57.775 [2024-11-19 02:59:08.170110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is 
disabled (timeout 15000 ms) 00:17:57.775 [2024-11-19 02:59:08.170122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:57.775 [2024-11-19 02:59:08.170232] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:57.775 [2024-11-19 02:59:08.170244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:57.775 [2024-11-19 02:59:08.170253] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:57.775 [2024-11-19 02:59:08.171087] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:57.775 [2024-11-19 02:59:08.172090] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:57.775 [2024-11-19 02:59:08.173094] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:57.775 [2024-11-19 02:59:08.174087] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:57.775 [2024-11-19 02:59:08.174153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:57.775 [2024-11-19 02:59:08.175107] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:57.775 [2024-11-19 02:59:08.175128] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:57.775 [2024-11-19 02:59:08.175138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.175162] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:57.775 [2024-11-19 02:59:08.175176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.175199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.775 [2024-11-19 02:59:08.175208] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.775 [2024-11-19 02:59:08.175230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.775 [2024-11-19 02:59:08.175251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.775 [2024-11-19 02:59:08.181720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:57.775 [2024-11-19 02:59:08.181745] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:57.775 
[2024-11-19 02:59:08.181755] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:57.775 [2024-11-19 02:59:08.181762] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:57.775 [2024-11-19 02:59:08.181772] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:57.775 [2024-11-19 02:59:08.181784] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:57.775 [2024-11-19 02:59:08.181794] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:57.775 [2024-11-19 02:59:08.181801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.181818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.181838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:57.775 [2024-11-19 02:59:08.189702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:57.775 [2024-11-19 02:59:08.189726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.775 [2024-11-19 02:59:08.189740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.775 [2024-11-19 02:59:08.189752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.775 [2024-11-19 02:59:08.189765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.775 [2024-11-19 02:59:08.189773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.189786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.189800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:57.775 [2024-11-19 02:59:08.197698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:57.775 [2024-11-19 02:59:08.197722] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:57.775 [2024-11-19 02:59:08.197733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.197746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number 
of queues (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.197757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.197771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:57.775 [2024-11-19 02:59:08.205701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:57.775 [2024-11-19 02:59:08.205779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.205796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.205809] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:57.775 [2024-11-19 02:59:08.205817] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:57.775 [2024-11-19 02:59:08.205823] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.775 [2024-11-19 02:59:08.205833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:57.775 [2024-11-19 02:59:08.213700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:57.775 [2024-11-19 02:59:08.213724] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:57.775 [2024-11-19 02:59:08.213746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.213766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:57.775 [2024-11-19 02:59:08.213780] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.776 [2024-11-19 02:59:08.213788] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.776 [2024-11-19 02:59:08.213794] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.776 [2024-11-19 02:59:08.213804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.221700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.221731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.221749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.221762] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.776 [2024-11-19 02:59:08.221770] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.776 [2024-11-19 02:59:08.221776] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.776 [2024-11-19 02:59:08.221786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.229701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.229724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.229737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.229752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.229764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.229773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.229782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.229790] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:57.776 [2024-11-19 02:59:08.229798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:57.776 [2024-11-19 02:59:08.229806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:57.776 [2024-11-19 02:59:08.229833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.237698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.237724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.245701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.245726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.253714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.253740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 
PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.261699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.261730] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:57.776 [2024-11-19 02:59:08.261741] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:57.776 [2024-11-19 02:59:08.261748] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:57.776 [2024-11-19 02:59:08.261754] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:57.776 [2024-11-19 02:59:08.261759] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:57.776 [2024-11-19 02:59:08.261769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:57.776 [2024-11-19 02:59:08.261780] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:57.776 [2024-11-19 02:59:08.261788] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:57.776 [2024-11-19 02:59:08.261794] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.776 [2024-11-19 02:59:08.261803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.261814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:57.776 [2024-11-19 02:59:08.261822] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.776 [2024-11-19 02:59:08.261828] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.776 [2024-11-19 02:59:08.261837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.261849] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:57.776 [2024-11-19 02:59:08.261857] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:57.776 [2024-11-19 02:59:08.261862] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.776 [2024-11-19 02:59:08.261871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:57.776 [2024-11-19 02:59:08.269700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.269728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.269745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:57.776 [2024-11-19 02:59:08.269758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:57.776 
===================================================== 00:17:57.776 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:57.776 ===================================================== 00:17:57.776 Controller Capabilities/Features 00:17:57.776 ================================ 00:17:57.776 Vendor ID: 4e58 00:17:57.776 Subsystem Vendor ID: 4e58 00:17:57.776 Serial Number: SPDK2 00:17:57.776 Model Number: SPDK bdev Controller 00:17:57.776 Firmware Version: 25.01 00:17:57.776 Recommended Arb Burst: 6 00:17:57.776 IEEE OUI Identifier: 8d 6b 50 00:17:57.776 Multi-path I/O 00:17:57.776 May have multiple subsystem ports: Yes 00:17:57.776 May have multiple controllers: Yes 00:17:57.776 Associated with SR-IOV VF: No 00:17:57.776 Max Data Transfer Size: 131072 00:17:57.776 Max Number of Namespaces: 32 00:17:57.776 Max Number of I/O Queues: 127 00:17:57.776 NVMe Specification Version (VS): 1.3 00:17:57.776 NVMe Specification Version (Identify): 1.3 00:17:57.776 Maximum Queue Entries: 256 00:17:57.776 Contiguous Queues Required: Yes 00:17:57.776 Arbitration Mechanisms Supported 00:17:57.776 Weighted Round Robin: Not Supported 00:17:57.776 Vendor Specific: Not Supported 00:17:57.776 Reset Timeout: 15000 ms 00:17:57.776 Doorbell Stride: 4 bytes 00:17:57.776 NVM Subsystem Reset: Not Supported 00:17:57.776 Command Sets Supported 00:17:57.776 NVM Command Set: Supported 00:17:57.776 Boot Partition: Not Supported 00:17:57.776 Memory Page Size Minimum: 4096 bytes 00:17:57.776 Memory Page Size Maximum: 4096 bytes 00:17:57.776 Persistent Memory Region: Not Supported 00:17:57.776 Optional Asynchronous Events Supported 00:17:57.776 Namespace Attribute Notices: Supported 00:17:57.776 Firmware Activation Notices: Not Supported 00:17:57.776 ANA Change Notices: Not Supported 00:17:57.776 PLE Aggregate Log Change Notices: Not Supported 00:17:57.776 LBA Status Info Alert Notices: Not Supported 00:17:57.776 EGE Aggregate Log Change Notices: Not Supported 00:17:57.776 Normal NVM Subsystem Shutdown event: Not Supported 00:17:57.776 Zone Descriptor Change Notices: Not Supported 00:17:57.776 Discovery Log Change Notices: Not Supported 00:17:57.776 Controller Attributes 00:17:57.776 128-bit Host Identifier: Supported 00:17:57.776 Non-Operational Permissive Mode: Not Supported 00:17:57.776 NVM Sets: Not Supported 00:17:57.776 Read Recovery Levels: Not Supported 00:17:57.776 Endurance Groups: Not Supported 00:17:57.776 Predictable Latency Mode: Not Supported 00:17:57.776 Traffic Based Keep ALive: Not Supported 00:17:57.776 Namespace Granularity: Not Supported 00:17:57.776 SQ Associations: Not Supported 00:17:57.776 UUID List: Not Supported 00:17:57.776 Multi-Domain Subsystem: Not Supported 00:17:57.776 Fixed Capacity Management: Not Supported 00:17:57.776 Variable Capacity Management: Not Supported 00:17:57.776 Delete Endurance Group: Not Supported 00:17:57.776 Delete NVM Set: Not Supported 00:17:57.776 Extended LBA Formats Supported: Not Supported 00:17:57.776 Flexible Data Placement Supported: Not Supported 00:17:57.776 00:17:57.776 Controller Memory Buffer Support 00:17:57.776 ================================ 00:17:57.776 Supported: No 00:17:57.776 00:17:57.776 Persistent Memory Region Support 00:17:57.776 ================================ 00:17:57.776 Supported: No 00:17:57.776 00:17:57.776 Admin Command Set Attributes 00:17:57.776 ============================ 00:17:57.777 Security Send/Receive: Not Supported 00:17:57.777 Format NVM: Not Supported 00:17:57.777 Firmware 
Activate/Download: Not Supported 00:17:57.777 Namespace Management: Not Supported 00:17:57.777 Device Self-Test: Not Supported 00:17:57.777 Directives: Not Supported 00:17:57.777 NVMe-MI: Not Supported 00:17:57.777 Virtualization Management: Not Supported 00:17:57.777 Doorbell Buffer Config: Not Supported 00:17:57.777 Get LBA Status Capability: Not Supported 00:17:57.777 Command & Feature Lockdown Capability: Not Supported 00:17:57.777 Abort Command Limit: 4 00:17:57.777 Async Event Request Limit: 4 00:17:57.777 Number of Firmware Slots: N/A 00:17:57.777 Firmware Slot 1 Read-Only: N/A 00:17:57.777 Firmware Activation Without Reset: N/A 00:17:57.777 Multiple Update Detection Support: N/A 00:17:57.777 Firmware Update Granularity: No Information Provided 00:17:57.777 Per-Namespace SMART Log: No 00:17:57.777 Asymmetric Namespace Access Log Page: Not Supported 00:17:57.777 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:57.777 Command Effects Log Page: Supported 00:17:57.777 Get Log Page Extended Data: Supported 00:17:57.777 Telemetry Log Pages: Not Supported 00:17:57.777 Persistent Event Log Pages: Not Supported 00:17:57.777 Supported Log Pages Log Page: May Support 00:17:57.777 Commands Supported & Effects Log Page: Not Supported 00:17:57.777 Feature Identifiers & Effects Log Page:May Support 00:17:57.777 NVMe-MI Commands & Effects Log Page: May Support 00:17:57.777 Data Area 4 for Telemetry Log: Not Supported 00:17:57.777 Error Log Page Entries Supported: 128 00:17:57.777 Keep Alive: Supported 00:17:57.777 Keep Alive Granularity: 10000 ms 00:17:57.777 00:17:57.777 NVM Command Set Attributes 00:17:57.777 ========================== 00:17:57.777 Submission Queue Entry Size 00:17:57.777 Max: 64 00:17:57.777 Min: 64 00:17:57.777 Completion Queue Entry Size 00:17:57.777 Max: 16 00:17:57.777 Min: 16 00:17:57.777 Number of Namespaces: 32 00:17:57.777 Compare Command: Supported 00:17:57.777 Write Uncorrectable Command: Not Supported 00:17:57.777 Dataset Management Command: Supported 00:17:57.777 Write Zeroes Command: Supported 00:17:57.777 Set Features Save Field: Not Supported 00:17:57.777 Reservations: Not Supported 00:17:57.777 Timestamp: Not Supported 00:17:57.777 Copy: Supported 00:17:57.777 Volatile Write Cache: Present 00:17:57.777 Atomic Write Unit (Normal): 1 00:17:57.777 Atomic Write Unit (PFail): 1 00:17:57.777 Atomic Compare & Write Unit: 1 00:17:57.777 Fused Compare & Write: Supported 00:17:57.777 Scatter-Gather List 00:17:57.777 SGL Command Set: Supported (Dword aligned) 00:17:57.777 SGL Keyed: Not Supported 00:17:57.777 SGL Bit Bucket Descriptor: Not Supported 00:17:57.777 SGL Metadata Pointer: Not Supported 00:17:57.777 Oversized SGL: Not Supported 00:17:57.777 SGL Metadata Address: Not Supported 00:17:57.777 SGL Offset: Not Supported 00:17:57.777 Transport SGL Data Block: Not Supported 00:17:57.777 Replay Protected Memory Block: Not Supported 00:17:57.777 00:17:57.777 Firmware Slot Information 00:17:57.777 ========================= 00:17:57.777 Active slot: 1 00:17:57.777 Slot 1 Firmware Revision: 25.01 00:17:57.777 00:17:57.777 00:17:57.777 Commands Supported and Effects 00:17:57.777 ============================== 00:17:57.777 Admin Commands 00:17:57.777 -------------- 00:17:57.777 Get Log Page (02h): Supported 00:17:57.777 Identify (06h): Supported 00:17:57.777 Abort (08h): Supported 00:17:57.777 Set Features (09h): Supported 00:17:57.777 Get Features (0Ah): Supported 00:17:57.777 Asynchronous Event Request (0Ch): Supported 00:17:57.777 Keep Alive (18h): Supported 00:17:57.777 I/O 
Commands 00:17:57.777 ------------ 00:17:57.777 Flush (00h): Supported LBA-Change 00:17:57.777 Write (01h): Supported LBA-Change 00:17:57.777 Read (02h): Supported 00:17:57.777 Compare (05h): Supported 00:17:57.777 Write Zeroes (08h): Supported LBA-Change 00:17:57.777 Dataset Management (09h): Supported LBA-Change 00:17:57.777 Copy (19h): Supported LBA-Change 00:17:57.777 00:17:57.777 Error Log 00:17:57.777 ========= 00:17:57.777 00:17:57.777 Arbitration 00:17:57.777 =========== 00:17:57.777 Arbitration Burst: 1 00:17:57.777 00:17:57.777 Power Management 00:17:57.777 ================ 00:17:57.777 Number of Power States: 1 00:17:57.777 Current Power State: Power State #0 00:17:57.777 Power State #0: 00:17:57.777 Max Power: 0.00 W 00:17:57.777 Non-Operational State: Operational 00:17:57.777 Entry Latency: Not Reported 00:17:57.777 Exit Latency: Not Reported 00:17:57.777 Relative Read Throughput: 0 00:17:57.777 Relative Read Latency: 0 00:17:57.777 Relative Write Throughput: 0 00:17:57.777 Relative Write Latency: 0 00:17:57.777 Idle Power: Not Reported 00:17:57.777 Active Power: Not Reported 00:17:57.777 Non-Operational Permissive Mode: Not Supported 00:17:57.777 00:17:57.777 Health Information 00:17:57.777 ================== 00:17:57.777 Critical Warnings: 00:17:57.777 Available Spare Space: OK 00:17:57.777 Temperature: OK 00:17:57.777 Device Reliability: OK 00:17:57.777 Read Only: No 00:17:57.777 Volatile Memory Backup: OK 00:17:57.777 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:57.777 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:57.777 Available Spare: 0% 00:17:57.777 Available Sp[2024-11-19 02:59:08.269878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:57.777 [2024-11-19 02:59:08.277703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:57.777 [2024-11-19 02:59:08.277767] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:57.777 [2024-11-19 02:59:08.277785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.777 [2024-11-19 02:59:08.277797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.777 [2024-11-19 02:59:08.277806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.777 [2024-11-19 02:59:08.277816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.777 [2024-11-19 02:59:08.277903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:57.777 [2024-11-19 02:59:08.277924] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:57.777 [2024-11-19 02:59:08.278902] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:57.777 [2024-11-19 02:59:08.278982] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:57.777 [2024-11-19 02:59:08.279010] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
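For reference, the controller and namespace dump above is the standard identify report printed by SPDK's example tools. The transcript does not record the exact identify invocation, so the sketch below is an assumption modeled on the perf command that follows, pointing the identify example at the same vfio-user transport ID:

    # Assumed invocation (not recorded verbatim in this log): dump controller and
    # namespace data for the vfio-user controller backing nqn.2019-07.io.spdk:cnode2.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'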
[/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:57.777 [2024-11-19 02:59:08.281702] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:57.777 [2024-11-19 02:59:08.281727] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 2 milliseconds 00:17:57.777 [2024-11-19 02:59:08.281778] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:57.777 [2024-11-19 02:59:08.283001] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:57.777 are Threshold: 0% 00:17:57.777 Life Percentage Used: 0% 00:17:57.777 Data Units Read: 0 00:17:57.777 Data Units Written: 0 00:17:57.777 Host Read Commands: 0 00:17:57.777 Host Write Commands: 0 00:17:57.777 Controller Busy Time: 0 minutes 00:17:57.777 Power Cycles: 0 00:17:57.777 Power On Hours: 0 hours 00:17:57.777 Unsafe Shutdowns: 0 00:17:57.777 Unrecoverable Media Errors: 0 00:17:57.777 Lifetime Error Log Entries: 0 00:17:57.777 Warning Temperature Time: 0 minutes 00:17:57.777 Critical Temperature Time: 0 minutes 00:17:57.777 00:17:57.777 Number of Queues 00:17:57.777 ================ 00:17:57.777 Number of I/O Submission Queues: 127 00:17:57.777 Number of I/O Completion Queues: 127 00:17:57.777 00:17:57.777 Active Namespaces 00:17:57.777 ================= 00:17:57.777 Namespace ID:1 00:17:57.777 Error Recovery Timeout: Unlimited 00:17:57.777 Command Set Identifier: NVM (00h) 00:17:57.777 Deallocate: Supported 00:17:57.777 Deallocated/Unwritten Error: Not Supported 00:17:57.777 Deallocated Read Value: Unknown 00:17:57.777 Deallocate in Write Zeroes: Not Supported 00:17:57.777 Deallocated Guard Field: 0xFFFF 00:17:57.777 Flush: Supported 00:17:57.777 Reservation: Supported 00:17:57.777 Namespace Sharing Capabilities: Multiple Controllers 00:17:57.777 Size (in LBAs): 131072 (0GiB) 00:17:57.777 Capacity (in LBAs): 131072 (0GiB) 00:17:57.777 Utilization (in LBAs): 131072 (0GiB) 00:17:57.777 NGUID: 45792A2F1BDC465FBF7F85F1D8347315 00:17:57.777 UUID: 45792a2f-1bdc-465f-bf7f-85f1d8347315 00:17:57.778 Thin Provisioning: Not Supported 00:17:57.778 Per-NS Atomic Units: Yes 00:17:57.778 Atomic Boundary Size (Normal): 0 00:17:57.778 Atomic Boundary Size (PFail): 0 00:17:57.778 Atomic Boundary Offset: 0 00:17:57.778 Maximum Single Source Range Length: 65535 00:17:57.778 Maximum Copy Length: 65535 00:17:57.778 Maximum Source Range Count: 1 00:17:57.778 NGUID/EUI64 Never Reused: No 00:17:57.778 Namespace Write Protected: No 00:17:57.778 Number of LBA Formats: 1 00:17:57.778 Current LBA Format: LBA Format #00 00:17:57.778 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:57.778 00:17:57.778 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:58.036 [2024-11-19 02:59:08.523550] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.305 Initializing NVMe Controllers 00:18:03.305 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.305 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:03.305 Initialization complete. Launching workers. 00:18:03.305 ======================================================== 00:18:03.305 Latency(us) 00:18:03.305 Device Information : IOPS MiB/s Average min max 00:18:03.305 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34058.36 133.04 3757.66 1173.32 7407.52 00:18:03.305 ======================================================== 00:18:03.305 Total : 34058.36 133.04 3757.66 1173.32 7407.52 00:18:03.305 00:18:03.305 [2024-11-19 02:59:13.628078] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.305 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:03.305 [2024-11-19 02:59:13.883783] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.575 Initializing NVMe Controllers 00:18:08.575 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:08.575 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:08.575 Initialization complete. Launching workers. 00:18:08.575 ======================================================== 00:18:08.575 Latency(us) 00:18:08.575 Device Information : IOPS MiB/s Average min max 00:18:08.575 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31361.36 122.51 4081.53 1219.07 9568.56 00:18:08.575 ======================================================== 00:18:08.575 Total : 31361.36 122.51 4081.53 1219.07 9568.56 00:18:08.575 00:18:08.575 [2024-11-19 02:59:18.910112] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.575 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:08.575 [2024-11-19 02:59:19.140015] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:13.843 [2024-11-19 02:59:24.285814] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:13.843 Initializing NVMe Controllers 00:18:13.843 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:13.843 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:13.843 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:13.843 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:13.843 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:13.843 Initialization complete. Launching workers. 
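The two spdk_nvme_perf runs above (4 KiB reads, then 4 KiB writes, queue depth 128 for 5 seconds, pinned to lcore 1 via -c 0x2) use the command lines recorded in this transcript; condensed into a standalone sketch:

    # 5-second 4 KiB read and write runs against the second vfio-user controller,
    # queue depth 128, 256 MB of DPDK memory (-s 256), core mask 0x2 (lcore 1).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2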
00:18:13.843 Starting thread on core 2 00:18:13.843 Starting thread on core 3 00:18:13.843 Starting thread on core 1 00:18:13.843 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:14.102 [2024-11-19 02:59:24.607217] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:17.389 [2024-11-19 02:59:27.685108] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:17.389 Initializing NVMe Controllers 00:18:17.389 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.389 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:17.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:17.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:17.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:17.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:17.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:17.389 Initialization complete. Launching workers. 00:18:17.389 Starting thread on core 1 with urgent priority queue 00:18:17.389 Starting thread on core 2 with urgent priority queue 00:18:17.389 Starting thread on core 3 with urgent priority queue 00:18:17.389 Starting thread on core 0 with urgent priority queue 00:18:17.389 SPDK bdev Controller (SPDK2 ) core 0: 5160.00 IO/s 19.38 secs/100000 ios 00:18:17.389 SPDK bdev Controller (SPDK2 ) core 1: 5185.00 IO/s 19.29 secs/100000 ios 00:18:17.389 SPDK bdev Controller (SPDK2 ) core 2: 5156.00 IO/s 19.39 secs/100000 ios 00:18:17.389 SPDK bdev Controller (SPDK2 ) core 3: 5512.67 IO/s 18.14 secs/100000 ios 00:18:17.389 ======================================================== 00:18:17.389 00:18:17.389 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:17.389 [2024-11-19 02:59:27.993154] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:17.389 Initializing NVMe Controllers 00:18:17.389 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.389 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.389 Namespace ID: 1 size: 0GB 00:18:17.389 Initialization complete. 00:18:17.389 INFO: using host memory buffer for IO 00:18:17.389 Hello world! 
00:18:17.389 [2024-11-19 02:59:28.007243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:17.648 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:17.906 [2024-11-19 02:59:28.304409] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:18.840 Initializing NVMe Controllers 00:18:18.840 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.840 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.840 Initialization complete. Launching workers. 00:18:18.840 submit (in ns) avg, min, max = 10371.6, 3488.9, 4031433.3 00:18:18.840 complete (in ns) avg, min, max = 23591.9, 2048.9, 4015471.1 00:18:18.840 00:18:18.840 Submit histogram 00:18:18.840 ================ 00:18:18.840 Range in us Cumulative Count 00:18:18.840 3.484 - 3.508: 0.4954% ( 64) 00:18:18.840 3.508 - 3.532: 1.3469% ( 110) 00:18:18.840 3.532 - 3.556: 3.9245% ( 333) 00:18:18.840 3.556 - 3.579: 8.5223% ( 594) 00:18:18.841 3.579 - 3.603: 15.9610% ( 961) 00:18:18.841 3.603 - 3.627: 24.7542% ( 1136) 00:18:18.841 3.627 - 3.650: 33.3230% ( 1107) 00:18:18.841 3.650 - 3.674: 40.5682% ( 936) 00:18:18.841 3.674 - 3.698: 47.1631% ( 852) 00:18:18.841 3.698 - 3.721: 53.2781% ( 790) 00:18:18.841 3.721 - 3.745: 57.6980% ( 571) 00:18:18.841 3.745 - 3.769: 61.4366% ( 483) 00:18:18.841 3.769 - 3.793: 64.5406% ( 401) 00:18:18.841 3.793 - 3.816: 68.1941% ( 472) 00:18:18.841 3.816 - 3.840: 72.0876% ( 503) 00:18:18.841 3.840 - 3.864: 76.4455% ( 563) 00:18:18.841 3.864 - 3.887: 80.2771% ( 495) 00:18:18.841 3.887 - 3.911: 83.3733% ( 400) 00:18:18.841 3.911 - 3.935: 85.9432% ( 332) 00:18:18.841 3.935 - 3.959: 87.9248% ( 256) 00:18:18.841 3.959 - 3.982: 89.6354% ( 221) 00:18:18.841 3.982 - 4.006: 90.8739% ( 160) 00:18:18.841 4.006 - 4.030: 91.9808% ( 143) 00:18:18.841 4.030 - 4.053: 93.0645% ( 140) 00:18:18.841 4.053 - 4.077: 93.8385% ( 100) 00:18:18.841 4.077 - 4.101: 94.4346% ( 77) 00:18:18.841 4.101 - 4.124: 95.1699% ( 95) 00:18:18.841 4.124 - 4.148: 95.7195% ( 71) 00:18:18.841 4.148 - 4.172: 96.1065% ( 50) 00:18:18.841 4.172 - 4.196: 96.3542% ( 32) 00:18:18.841 4.196 - 4.219: 96.5477% ( 25) 00:18:18.841 4.219 - 4.243: 96.7180% ( 22) 00:18:18.841 4.243 - 4.267: 96.8883% ( 22) 00:18:18.841 4.267 - 4.290: 97.0122% ( 16) 00:18:18.841 4.290 - 4.314: 97.1360% ( 16) 00:18:18.841 4.314 - 4.338: 97.1902% ( 7) 00:18:18.841 4.338 - 4.361: 97.2908% ( 13) 00:18:18.841 4.361 - 4.385: 97.3295% ( 5) 00:18:18.841 4.385 - 4.409: 97.3682% ( 5) 00:18:18.841 4.409 - 4.433: 97.3992% ( 4) 00:18:18.841 4.456 - 4.480: 97.4301% ( 4) 00:18:18.841 4.480 - 4.504: 97.4379% ( 1) 00:18:18.841 4.504 - 4.527: 97.4456% ( 1) 00:18:18.841 4.551 - 4.575: 97.4534% ( 1) 00:18:18.841 4.575 - 4.599: 97.4611% ( 1) 00:18:18.841 4.646 - 4.670: 97.4843% ( 3) 00:18:18.841 4.670 - 4.693: 97.5075% ( 3) 00:18:18.841 4.693 - 4.717: 97.5230% ( 2) 00:18:18.841 4.717 - 4.741: 97.5927% ( 9) 00:18:18.841 4.741 - 4.764: 97.6469% ( 7) 00:18:18.841 4.764 - 4.788: 97.7011% ( 7) 00:18:18.841 4.788 - 4.812: 97.7552% ( 7) 00:18:18.841 4.812 - 4.836: 97.7939% ( 5) 00:18:18.841 4.836 - 4.859: 97.8326% ( 5) 00:18:18.841 4.859 - 4.883: 97.8636% ( 4) 00:18:18.841 4.883 - 4.907: 97.9023% ( 5) 00:18:18.841 4.907 - 4.930: 97.9642% ( 8) 00:18:18.841 4.930 - 
4.954: 97.9875% ( 3) 00:18:18.841 4.954 - 4.978: 98.0416% ( 7) 00:18:18.841 4.978 - 5.001: 98.0649% ( 3) 00:18:18.841 5.025 - 5.049: 98.0881% ( 3) 00:18:18.841 5.049 - 5.073: 98.1423% ( 7) 00:18:18.841 5.073 - 5.096: 98.1732% ( 4) 00:18:18.841 5.096 - 5.120: 98.1887% ( 2) 00:18:18.841 5.144 - 5.167: 98.2042% ( 2) 00:18:18.841 5.167 - 5.191: 98.2197% ( 2) 00:18:18.841 5.191 - 5.215: 98.2429% ( 3) 00:18:18.841 5.215 - 5.239: 98.2506% ( 1) 00:18:18.841 5.239 - 5.262: 98.2584% ( 1) 00:18:18.841 5.262 - 5.286: 98.2661% ( 1) 00:18:18.841 5.286 - 5.310: 98.2816% ( 2) 00:18:18.841 5.476 - 5.499: 98.2893% ( 1) 00:18:18.841 5.547 - 5.570: 98.2971% ( 1) 00:18:18.841 5.594 - 5.618: 98.3048% ( 1) 00:18:18.841 5.618 - 5.641: 98.3203% ( 2) 00:18:18.841 5.973 - 5.997: 98.3280% ( 1) 00:18:18.841 6.068 - 6.116: 98.3358% ( 1) 00:18:18.841 6.542 - 6.590: 98.3435% ( 1) 00:18:18.841 6.590 - 6.637: 98.3513% ( 1) 00:18:18.841 6.827 - 6.874: 98.3590% ( 1) 00:18:18.841 6.969 - 7.016: 98.3667% ( 1) 00:18:18.841 7.111 - 7.159: 98.3745% ( 1) 00:18:18.841 7.159 - 7.206: 98.3977% ( 3) 00:18:18.841 7.253 - 7.301: 98.4209% ( 3) 00:18:18.841 7.348 - 7.396: 98.4287% ( 1) 00:18:18.841 7.396 - 7.443: 98.4364% ( 1) 00:18:18.841 7.443 - 7.490: 98.4442% ( 1) 00:18:18.841 7.538 - 7.585: 98.4519% ( 1) 00:18:18.841 7.585 - 7.633: 98.4596% ( 1) 00:18:18.841 7.633 - 7.680: 98.4751% ( 2) 00:18:18.841 7.680 - 7.727: 98.4829% ( 1) 00:18:18.841 7.727 - 7.775: 98.4906% ( 1) 00:18:18.841 8.296 - 8.344: 98.5061% ( 2) 00:18:18.841 8.344 - 8.391: 98.5138% ( 1) 00:18:18.841 8.391 - 8.439: 98.5216% ( 1) 00:18:18.841 8.628 - 8.676: 98.5293% ( 1) 00:18:18.841 8.770 - 8.818: 98.5448% ( 2) 00:18:18.841 8.818 - 8.865: 98.5525% ( 1) 00:18:18.841 8.913 - 8.960: 98.5680% ( 2) 00:18:18.841 9.007 - 9.055: 98.5757% ( 1) 00:18:18.841 9.102 - 9.150: 98.5835% ( 1) 00:18:18.841 9.244 - 9.292: 98.5990% ( 2) 00:18:18.841 9.387 - 9.434: 98.6067% ( 1) 00:18:18.841 9.481 - 9.529: 98.6144% ( 1) 00:18:18.841 9.529 - 9.576: 98.6222% ( 1) 00:18:18.841 9.671 - 9.719: 98.6299% ( 1) 00:18:18.841 9.719 - 9.766: 98.6377% ( 1) 00:18:18.841 9.766 - 9.813: 98.6531% ( 2) 00:18:18.841 9.813 - 9.861: 98.6609% ( 1) 00:18:18.841 9.861 - 9.908: 98.6686% ( 1) 00:18:18.841 9.956 - 10.003: 98.6764% ( 1) 00:18:18.841 10.003 - 10.050: 98.6841% ( 1) 00:18:18.841 10.050 - 10.098: 98.6918% ( 1) 00:18:18.841 10.382 - 10.430: 98.7073% ( 2) 00:18:18.841 10.951 - 10.999: 98.7151% ( 1) 00:18:18.841 10.999 - 11.046: 98.7228% ( 1) 00:18:18.841 11.283 - 11.330: 98.7306% ( 1) 00:18:18.841 11.852 - 11.899: 98.7383% ( 1) 00:18:18.841 11.899 - 11.947: 98.7460% ( 1) 00:18:18.841 12.421 - 12.516: 98.7538% ( 1) 00:18:18.841 13.369 - 13.464: 98.7615% ( 1) 00:18:18.841 13.559 - 13.653: 98.7770% ( 2) 00:18:18.841 13.748 - 13.843: 98.7847% ( 1) 00:18:18.841 13.843 - 13.938: 98.7925% ( 1) 00:18:18.841 14.033 - 14.127: 98.8002% ( 1) 00:18:18.841 14.222 - 14.317: 98.8080% ( 1) 00:18:18.841 14.886 - 14.981: 98.8157% ( 1) 00:18:18.841 16.024 - 16.119: 98.8234% ( 1) 00:18:18.841 16.877 - 16.972: 98.8312% ( 1) 00:18:18.841 17.256 - 17.351: 98.8544% ( 3) 00:18:18.841 17.351 - 17.446: 98.8854% ( 4) 00:18:18.841 17.446 - 17.541: 98.9473% ( 8) 00:18:18.841 17.541 - 17.636: 98.9782% ( 4) 00:18:18.841 17.636 - 17.730: 99.0092% ( 4) 00:18:18.841 17.730 - 17.825: 99.0324% ( 3) 00:18:18.841 17.825 - 17.920: 99.0866% ( 7) 00:18:18.841 17.920 - 18.015: 99.1253% ( 5) 00:18:18.841 18.015 - 18.110: 99.1872% ( 8) 00:18:18.841 18.110 - 18.204: 99.2492% ( 8) 00:18:18.841 18.204 - 18.299: 99.3111% ( 8) 00:18:18.841 18.299 - 18.394: 
99.4040% ( 12) 00:18:18.841 18.394 - 18.489: 99.5046% ( 13) 00:18:18.841 18.489 - 18.584: 99.5433% ( 5) 00:18:18.841 18.584 - 18.679: 99.5975% ( 7) 00:18:18.841 18.679 - 18.773: 99.6439% ( 6) 00:18:18.841 18.773 - 18.868: 99.6981% ( 7) 00:18:18.841 18.868 - 18.963: 99.7213% ( 3) 00:18:18.841 18.963 - 19.058: 99.7446% ( 3) 00:18:18.841 19.058 - 19.153: 99.7523% ( 1) 00:18:18.841 19.153 - 19.247: 99.7600% ( 1) 00:18:18.841 21.807 - 21.902: 99.7678% ( 1) 00:18:18.841 22.376 - 22.471: 99.7755% ( 1) 00:18:18.841 23.135 - 23.230: 99.7833% ( 1) 00:18:18.841 23.704 - 23.799: 99.7910% ( 1) 00:18:18.841 27.496 - 27.686: 99.8065% ( 2) 00:18:18.841 27.876 - 28.065: 99.8142% ( 1) 00:18:18.841 28.065 - 28.255: 99.8220% ( 1) 00:18:18.841 28.255 - 28.444: 99.8297% ( 1) 00:18:18.841 28.824 - 29.013: 99.8374% ( 1) 00:18:18.841 3082.619 - 3094.756: 99.8452% ( 1) 00:18:18.841 3980.705 - 4004.978: 99.9613% ( 15) 00:18:18.841 4004.978 - 4029.250: 99.9923% ( 4) 00:18:18.841 4029.250 - 4053.523: 100.0000% ( 1) 00:18:18.841 00:18:18.841 Complete histogram 00:18:18.841 ================== 00:18:18.841 Range in us Cumulative Count 00:18:18.841 2.039 - 2.050: 0.0310% ( 4) 00:18:18.841 2.050 - 2.062: 13.9949% ( 1804) 00:18:18.841 2.062 - 2.074: 46.5671% ( 4208) 00:18:18.841 2.074 - 2.086: 49.2995% ( 353) 00:18:18.841 2.086 - 2.098: 55.4687% ( 797) 00:18:18.841 2.098 - 2.110: 61.0574% ( 722) 00:18:18.841 2.110 - 2.121: 62.5126% ( 188) 00:18:18.841 2.121 - 2.133: 72.4050% ( 1278) 00:18:18.841 2.133 - 2.145: 77.4518% ( 652) 00:18:18.841 2.145 - 2.157: 78.1949% ( 96) 00:18:18.841 2.157 - 2.169: 80.4706% ( 294) 00:18:18.841 2.169 - 2.181: 81.5853% ( 144) 00:18:18.841 2.181 - 2.193: 82.2664% ( 88) 00:18:18.841 2.193 - 2.204: 86.8333% ( 590) 00:18:18.841 2.204 - 2.216: 89.6045% ( 358) 00:18:18.841 2.216 - 2.228: 91.5164% ( 247) 00:18:18.841 2.228 - 2.240: 92.8400% ( 171) 00:18:18.841 2.240 - 2.252: 93.4360% ( 77) 00:18:18.841 2.252 - 2.264: 93.6837% ( 32) 00:18:18.842 2.264 - 2.276: 93.9314% ( 32) 00:18:18.842 2.276 - 2.287: 94.5274% ( 77) 00:18:18.842 2.287 - 2.299: 95.1467% ( 80) 00:18:18.842 2.299 - 2.311: 95.3712% ( 29) 00:18:18.842 2.311 - 2.323: 95.5105% ( 18) 00:18:18.842 2.323 - 2.335: 95.5802% ( 9) 00:18:18.842 2.335 - 2.347: 95.6266% ( 6) 00:18:18.842 2.347 - 2.359: 95.7504% ( 16) 00:18:18.842 2.359 - 2.370: 96.0755% ( 42) 00:18:18.842 2.370 - 2.382: 96.4239% ( 45) 00:18:18.842 2.382 - 2.394: 96.7180% ( 38) 00:18:18.842 2.394 - 2.406: 96.8883% ( 22) 00:18:18.842 2.406 - 2.418: 97.1205% ( 30) 00:18:18.842 2.418 - 2.430: 97.2753% ( 20) 00:18:18.842 2.430 - 2.441: 97.4456% ( 22) 00:18:18.842 2.441 - 2.453: 97.5772% ( 17) 00:18:18.842 2.453 - 2.465: 97.7939% ( 28) 00:18:18.842 2.465 - 2.477: 97.9255% ( 17) 00:18:18.842 2.477 - 2.489: 98.0571% ( 17) 00:18:18.842 2.489 - 2.501: 98.1423% ( 11) 00:18:18.842 2.501 - 2.513: 98.2352% ( 12) 00:18:18.842 2.513 - 2.524: 98.2893% ( 7) 00:18:18.842 2.524 - 2.536: 98.3358% ( 6) 00:18:18.842 2.536 - 2.548: 98.3745% ( 5) 00:18:18.842 2.548 - 2.560: 98.3977% ( 3) 00:18:18.842 2.584 - 2.596: 98.4054% ( 1) 00:18:18.842 2.607 - 2.619: 98.4132% ( 1) 00:18:18.842 2.631 - 2.643: 98.4209% ( 1) 00:18:18.842 2.643 - 2.655: 98.4287% ( 1) 00:18:18.842 2.655 - 2.667: 98.4364% ( 1) 00:18:18.842 3.461 - 3.484: 98.4442% ( 1) 00:18:18.842 3.508 - 3.532: 98.4596% ( 2) 00:18:18.842 3.532 - 3.556: 98.4751% ( 2) 00:18:18.842 3.556 - 3.579: 98.4983% ( 3) 00:18:18.842 3.579 - 3.603: 98.5138% ( 2) 00:18:18.842 3.627 - 3.650: 98.5370% ( 3) 00:18:18.842 3.650 - 3.674: 98.5525% ( 2) 00:18:18.842 3.674 - 3.698: 
98.5603% ( 1) 00:18:18.842 3.745 - 3.769: 98.5680% ( 1) 00:18:18.842 3.793 - 3.816: 98.5757% ( 1) 00:18:18.842 3.840 - 3.864: 98.5835% ( 1) 00:18:18.842 3.887 - 3.911: 98.5990% ( 2) 00:18:18.842 3.911 - 3.935: 98.6067% ( 1) 00:18:18.842 3.959 - 3.982: 98.6144% ( 1) 00:18:18.842 4.030 - 4.053: 98.6222% ( 1) 00:18:18.842 4.053 - 4.077: 98.6377% ( 2) 00:18:18.842 4.101 - 4.124: 98.6454% ( 1) 00:18:18.842 4.124 - 4.148: 98.6531% ( 1) 00:18:18.842 4.148 - 4.172: 98.6609% ( 1) 00:18:18.842 4.172 - 4.196: 98.6686% ( 1) 00:18:18.842 5.807 - 5.831: 98.6764% ( 1) 00:18:18.842 5.879 - 5.902: 98.6841% ( 1) 00:18:18.842 6.258 - 6.305: 98.6918% ( 1) 00:18:18.842 6.400 - 6.447: 98.6996% ( 1) 00:18:18.842 6.495 - 6.542: 98.7073% ( 1) 00:18:18.842 6.590 - 6.637: 98.7228% ( 2) 00:18:18.842 6.874 - 6.921: 98.7306% ( 1) 00:18:18.842 6.921 - 6.969: 98.7383% ( 1) 00:18:18.842 7.064 - 7.111: 98.7538% ( 2) 00:18:18.842 7.159 - 7.206: 98.7615% ( 1) 00:18:18.842 7.206 - 7.253: 98.7693% ( 1) 00:18:18.842 7.253 - 7.301: 98.7770% ( 1) 00:18:18.842 7.348 - 7.396: 98.7847% ( 1) 00:18:18.842 7.538 - 7.585: 98.7925% ( 1) 00:18:18.842 7.585 - 7.633: 98.8002% ( 1) 00:18:18.842 7.727 - 7.775: 98.8080% ( 1) 00:18:18.842 7.822 - 7.870: 98.8157% ( 1) 00:18:18.842 7.917 - 7.964: 98.8234% ( 1) 00:18:18.842 8.770 - 8.818: 98.8312% ( 1) 00:18:18.842 8.960 - 9.007: 9[2024-11-19 02:59:29.398493] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:18.842 8.8389% ( 1) 00:18:18.842 10.003 - 10.050: 98.8467% ( 1) 00:18:18.842 10.619 - 10.667: 98.8544% ( 1) 00:18:18.842 13.369 - 13.464: 98.8621% ( 1) 00:18:18.842 15.550 - 15.644: 98.8699% ( 1) 00:18:18.842 15.644 - 15.739: 98.8854% ( 2) 00:18:18.842 15.739 - 15.834: 98.9163% ( 4) 00:18:18.842 15.834 - 15.929: 98.9318% ( 2) 00:18:18.842 15.929 - 16.024: 98.9473% ( 2) 00:18:18.842 16.024 - 16.119: 98.9705% ( 3) 00:18:18.842 16.119 - 16.213: 98.9937% ( 3) 00:18:18.842 16.213 - 16.308: 99.0092% ( 2) 00:18:18.842 16.308 - 16.403: 99.0479% ( 5) 00:18:18.842 16.403 - 16.498: 99.1021% ( 7) 00:18:18.842 16.498 - 16.593: 99.1408% ( 5) 00:18:18.842 16.593 - 16.687: 99.1718% ( 4) 00:18:18.842 16.687 - 16.782: 99.2182% ( 6) 00:18:18.842 16.782 - 16.877: 99.2569% ( 5) 00:18:18.842 16.877 - 16.972: 99.2801% ( 3) 00:18:18.842 16.972 - 17.067: 99.3111% ( 4) 00:18:18.842 17.067 - 17.161: 99.3421% ( 4) 00:18:18.842 17.161 - 17.256: 99.3730% ( 4) 00:18:18.842 17.351 - 17.446: 99.3808% ( 1) 00:18:18.842 17.541 - 17.636: 99.3885% ( 1) 00:18:18.842 17.825 - 17.920: 99.3962% ( 1) 00:18:18.842 17.920 - 18.015: 99.4117% ( 2) 00:18:18.842 18.015 - 18.110: 99.4195% ( 1) 00:18:18.842 18.204 - 18.299: 99.4272% ( 1) 00:18:18.842 18.584 - 18.679: 99.4349% ( 1) 00:18:18.842 18.679 - 18.773: 99.4427% ( 1) 00:18:18.842 19.816 - 19.911: 99.4504% ( 1) 00:18:18.842 20.575 - 20.670: 99.4582% ( 1) 00:18:18.842 24.083 - 24.178: 99.4659% ( 1) 00:18:18.842 3980.705 - 4004.978: 99.8762% ( 53) 00:18:18.842 4004.978 - 4029.250: 100.0000% ( 16) 00:18:18.842 00:18:18.842 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:18.842 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:18.842 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:18.842 02:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:18.842 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:19.410 [ 00:18:19.410 { 00:18:19.410 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:19.410 "subtype": "Discovery", 00:18:19.410 "listen_addresses": [], 00:18:19.410 "allow_any_host": true, 00:18:19.410 "hosts": [] 00:18:19.410 }, 00:18:19.410 { 00:18:19.410 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:19.410 "subtype": "NVMe", 00:18:19.410 "listen_addresses": [ 00:18:19.410 { 00:18:19.410 "trtype": "VFIOUSER", 00:18:19.410 "adrfam": "IPv4", 00:18:19.410 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:19.410 "trsvcid": "0" 00:18:19.410 } 00:18:19.410 ], 00:18:19.410 "allow_any_host": true, 00:18:19.410 "hosts": [], 00:18:19.410 "serial_number": "SPDK1", 00:18:19.410 "model_number": "SPDK bdev Controller", 00:18:19.410 "max_namespaces": 32, 00:18:19.410 "min_cntlid": 1, 00:18:19.410 "max_cntlid": 65519, 00:18:19.410 "namespaces": [ 00:18:19.410 { 00:18:19.410 "nsid": 1, 00:18:19.410 "bdev_name": "Malloc1", 00:18:19.410 "name": "Malloc1", 00:18:19.410 "nguid": "FA08637045EA448EAC72CB55382669DC", 00:18:19.410 "uuid": "fa086370-45ea-448e-ac72-cb55382669dc" 00:18:19.410 }, 00:18:19.410 { 00:18:19.410 "nsid": 2, 00:18:19.410 "bdev_name": "Malloc3", 00:18:19.410 "name": "Malloc3", 00:18:19.410 "nguid": "26F1C2FF09F14FAD8C300A249B17C561", 00:18:19.410 "uuid": "26f1c2ff-09f1-4fad-8c30-0a249b17c561" 00:18:19.410 } 00:18:19.410 ] 00:18:19.410 }, 00:18:19.410 { 00:18:19.410 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:19.410 "subtype": "NVMe", 00:18:19.410 "listen_addresses": [ 00:18:19.410 { 00:18:19.410 "trtype": "VFIOUSER", 00:18:19.410 "adrfam": "IPv4", 00:18:19.410 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:19.410 "trsvcid": "0" 00:18:19.410 } 00:18:19.410 ], 00:18:19.410 "allow_any_host": true, 00:18:19.410 "hosts": [], 00:18:19.410 "serial_number": "SPDK2", 00:18:19.410 "model_number": "SPDK bdev Controller", 00:18:19.410 "max_namespaces": 32, 00:18:19.410 "min_cntlid": 1, 00:18:19.410 "max_cntlid": 65519, 00:18:19.410 "namespaces": [ 00:18:19.410 { 00:18:19.410 "nsid": 1, 00:18:19.410 "bdev_name": "Malloc2", 00:18:19.410 "name": "Malloc2", 00:18:19.410 "nguid": "45792A2F1BDC465FBF7F85F1D8347315", 00:18:19.410 "uuid": "45792a2f-1bdc-465f-bf7f-85f1d8347315" 00:18:19.410 } 00:18:19.410 ] 00:18:19.410 } 00:18:19.410 ] 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=233737 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:19.410 [2024-11-19 02:59:29.962226] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:19.410 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:19.977 Malloc4 00:18:19.977 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:19.977 [2024-11-19 02:59:30.570161] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:19.977 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:20.235 Asynchronous Event Request test 00:18:20.235 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:20.235 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:20.235 Registering asynchronous event callbacks... 00:18:20.235 Starting namespace attribute notice tests for all controllers... 00:18:20.235 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:20.235 aer_cb - Changed Namespace 00:18:20.235 Cleaning up... 
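The namespace-attribute AER exercise above boils down to: start the aer tool against cnode2, hot-add a namespace over RPC, and let the tool report the resulting Changed Namespace notice. A condensed sketch using the commands recorded in the transcript (the touch-file handshake the script uses for synchronization is simplified here to a plain sleep):

    # Start the AER listener in the background; it subscribes to async events from cnode2.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    $SPDK/test/nvme/aer/aer -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
    sleep 1   # simplified stand-in for the script's wait on /tmp/aer_touch_file
    # Hot-add a second namespace; the target raises a namespace-attribute-changed AEN
    # and the listener logs "aer_cb - Changed Namespace" before cleaning up.
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    wait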
00:18:20.235 [ 00:18:20.235 { 00:18:20.235 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:20.235 "subtype": "Discovery", 00:18:20.235 "listen_addresses": [], 00:18:20.235 "allow_any_host": true, 00:18:20.235 "hosts": [] 00:18:20.235 }, 00:18:20.235 { 00:18:20.235 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:20.235 "subtype": "NVMe", 00:18:20.235 "listen_addresses": [ 00:18:20.235 { 00:18:20.235 "trtype": "VFIOUSER", 00:18:20.235 "adrfam": "IPv4", 00:18:20.235 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:20.235 "trsvcid": "0" 00:18:20.235 } 00:18:20.235 ], 00:18:20.235 "allow_any_host": true, 00:18:20.235 "hosts": [], 00:18:20.235 "serial_number": "SPDK1", 00:18:20.235 "model_number": "SPDK bdev Controller", 00:18:20.235 "max_namespaces": 32, 00:18:20.235 "min_cntlid": 1, 00:18:20.235 "max_cntlid": 65519, 00:18:20.235 "namespaces": [ 00:18:20.235 { 00:18:20.235 "nsid": 1, 00:18:20.235 "bdev_name": "Malloc1", 00:18:20.235 "name": "Malloc1", 00:18:20.235 "nguid": "FA08637045EA448EAC72CB55382669DC", 00:18:20.235 "uuid": "fa086370-45ea-448e-ac72-cb55382669dc" 00:18:20.235 }, 00:18:20.235 { 00:18:20.235 "nsid": 2, 00:18:20.235 "bdev_name": "Malloc3", 00:18:20.235 "name": "Malloc3", 00:18:20.235 "nguid": "26F1C2FF09F14FAD8C300A249B17C561", 00:18:20.235 "uuid": "26f1c2ff-09f1-4fad-8c30-0a249b17c561" 00:18:20.235 } 00:18:20.235 ] 00:18:20.235 }, 00:18:20.235 { 00:18:20.235 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:20.235 "subtype": "NVMe", 00:18:20.235 "listen_addresses": [ 00:18:20.235 { 00:18:20.235 "trtype": "VFIOUSER", 00:18:20.235 "adrfam": "IPv4", 00:18:20.235 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:20.235 "trsvcid": "0" 00:18:20.235 } 00:18:20.235 ], 00:18:20.235 "allow_any_host": true, 00:18:20.235 "hosts": [], 00:18:20.235 "serial_number": "SPDK2", 00:18:20.235 "model_number": "SPDK bdev Controller", 00:18:20.235 "max_namespaces": 32, 00:18:20.235 "min_cntlid": 1, 00:18:20.235 "max_cntlid": 65519, 00:18:20.236 "namespaces": [ 00:18:20.236 { 00:18:20.236 "nsid": 1, 00:18:20.236 "bdev_name": "Malloc2", 00:18:20.236 "name": "Malloc2", 00:18:20.236 "nguid": "45792A2F1BDC465FBF7F85F1D8347315", 00:18:20.236 "uuid": "45792a2f-1bdc-465f-bf7f-85f1d8347315" 00:18:20.236 }, 00:18:20.236 { 00:18:20.236 "nsid": 2, 00:18:20.236 "bdev_name": "Malloc4", 00:18:20.236 "name": "Malloc4", 00:18:20.236 "nguid": "0F817D65049348E49756CF037FE1D9CD", 00:18:20.236 "uuid": "0f817d65-0493-48e4-9756-cf037fe1d9cd" 00:18:20.236 } 00:18:20.236 ] 00:18:20.236 } 00:18:20.236 ] 00:18:20.236 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 233737 00:18:20.236 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:20.236 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 228134 00:18:20.236 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 228134 ']' 00:18:20.236 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 228134 00:18:20.236 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:20.495 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.495 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228134 00:18:20.495 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.495 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.495 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228134' 00:18:20.495 killing process with pid 228134 00:18:20.495 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 228134 00:18:20.495 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 228134 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=233881 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 233881' 00:18:20.755 Process pid: 233881 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 233881 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 233881 ']' 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.755 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:20.755 [2024-11-19 02:59:31.247102] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:20.755 [2024-11-19 02:59:31.248144] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:18:20.755 [2024-11-19 02:59:31.248221] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.755 [2024-11-19 02:59:31.317713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.755 [2024-11-19 02:59:31.362077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.755 [2024-11-19 02:59:31.362136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.755 [2024-11-19 02:59:31.362164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.755 [2024-11-19 02:59:31.362175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.755 [2024-11-19 02:59:31.362184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.755 [2024-11-19 02:59:31.363668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.755 [2024-11-19 02:59:31.363737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.755 [2024-11-19 02:59:31.363799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.755 [2024-11-19 02:59:31.363802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.015 [2024-11-19 02:59:31.448412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:21.015 [2024-11-19 02:59:31.448624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:21.015 [2024-11-19 02:59:31.448890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:21.015 [2024-11-19 02:59:31.449442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:21.015 [2024-11-19 02:59:31.449661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
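The interrupt-mode target started above (nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode) is configured in the transcript lines that follow: a VFIOUSER transport created with -M -I, then one malloc bdev, subsystem, namespace, and vfio-user listener per device directory. Condensed into a standalone sketch of that RPC sequence:

    # Bring up nvmf_tgt in interrupt mode on cores 0-3, then configure it over RPC
    # (the test script waits for the RPC socket before issuing these commands).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done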
00:18:21.015 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.015 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:21.015 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:21.953 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:22.213 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:22.213 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:22.213 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:22.213 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:22.213 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:22.783 Malloc1 00:18:22.783 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:23.042 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:23.300 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:23.557 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:23.558 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:23.558 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:23.816 Malloc2 00:18:23.816 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:24.074 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:24.331 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 233881 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 233881 ']' 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 233881 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 233881 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 233881' 00:18:24.590 killing process with pid 233881 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 233881 00:18:24.590 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 233881 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:24.849 00:18:24.849 real 0m53.793s 00:18:24.849 user 3m28.033s 00:18:24.849 sys 0m3.889s 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:24.849 ************************************ 00:18:24.849 END TEST nvmf_vfio_user 00:18:24.849 ************************************ 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.849 ************************************ 00:18:24.849 START TEST nvmf_vfio_user_nvme_compliance 00:18:24.849 ************************************ 00:18:24.849 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:25.109 * Looking for test storage... 
00:18:25.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.109 --rc genhtml_branch_coverage=1 00:18:25.109 --rc genhtml_function_coverage=1 00:18:25.109 --rc genhtml_legend=1 00:18:25.109 --rc geninfo_all_blocks=1 00:18:25.109 --rc geninfo_unexecuted_blocks=1 00:18:25.109 00:18:25.109 ' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.109 --rc genhtml_branch_coverage=1 00:18:25.109 --rc genhtml_function_coverage=1 00:18:25.109 --rc genhtml_legend=1 00:18:25.109 --rc geninfo_all_blocks=1 00:18:25.109 --rc geninfo_unexecuted_blocks=1 00:18:25.109 00:18:25.109 ' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.109 --rc genhtml_branch_coverage=1 00:18:25.109 --rc genhtml_function_coverage=1 00:18:25.109 --rc genhtml_legend=1 00:18:25.109 --rc geninfo_all_blocks=1 00:18:25.109 --rc geninfo_unexecuted_blocks=1 00:18:25.109 00:18:25.109 ' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.109 --rc genhtml_branch_coverage=1 00:18:25.109 --rc genhtml_function_coverage=1 00:18:25.109 --rc genhtml_legend=1 00:18:25.109 --rc geninfo_all_blocks=1 00:18:25.109 --rc 
geninfo_unexecuted_blocks=1 00:18:25.109 00:18:25.109 ' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.109 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=234492 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 234492' 00:18:25.110 Process pid: 234492 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 234492 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 234492 ']' 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.110 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:25.110 [2024-11-19 02:59:35.648678] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
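Just above, compliance.sh launches a dedicated nvmf_tgt (-i 0 shared-memory id, -e 0xFFFF tracepoint mask, -m 0x7 three-core mask), installs a cleanup trap, and only then waits for the RPC socket before configuring anything. A hedged sketch of that start-up sequence; the polling loop is an assumption about what waitforlisten does, not its actual implementation:

# Start the target with the flags seen in the trace and wait for its RPC socket.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
echo "Process pid: $nvmfpid"
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT

# Assumed waitforlisten behaviour: poll the default RPC socket until it answers.
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during start-up"; exit 1; }
  sleep 0.5
done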
00:18:25.110 [2024-11-19 02:59:35.648799] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.110 [2024-11-19 02:59:35.714992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:25.369 [2024-11-19 02:59:35.764482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.369 [2024-11-19 02:59:35.764545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.369 [2024-11-19 02:59:35.764572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.369 [2024-11-19 02:59:35.764584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.369 [2024-11-19 02:59:35.764593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.369 [2024-11-19 02:59:35.766084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.369 [2024-11-19 02:59:35.766151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.369 [2024-11-19 02:59:35.766148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.369 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.369 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:25.369 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.304 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:26.563 malloc0 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:26.563 02:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.563 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:26.563 00:18:26.563 00:18:26.563 CUnit - A unit testing framework for C - Version 2.1-3 00:18:26.563 http://cunit.sourceforge.net/ 00:18:26.563 00:18:26.563 00:18:26.563 Suite: nvme_compliance 00:18:26.563 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 02:59:37.132957] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.563 [2024-11-19 02:59:37.134485] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:26.563 [2024-11-19 02:59:37.134509] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:26.563 [2024-11-19 02:59:37.134535] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:26.563 [2024-11-19 02:59:37.136005] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.563 passed 00:18:26.821 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 02:59:37.226614] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.821 [2024-11-19 02:59:37.229630] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.821 passed 00:18:26.821 Test: admin_identify_ns ...[2024-11-19 02:59:37.318895] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.821 [2024-11-19 02:59:37.378722] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:26.821 [2024-11-19 02:59:37.386703] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:26.821 [2024-11-19 02:59:37.407879] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:27.079 passed 00:18:27.079 Test: admin_get_features_mandatory_features ...[2024-11-19 02:59:37.491092] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.079 [2024-11-19 02:59:37.495117] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.079 passed 00:18:27.079 Test: admin_get_features_optional_features ...[2024-11-19 02:59:37.580648] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.079 [2024-11-19 02:59:37.583667] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.079 passed 00:18:27.079 Test: admin_set_features_number_of_queues ...[2024-11-19 02:59:37.672770] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.337 [2024-11-19 02:59:37.778810] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.337 passed 00:18:27.337 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 02:59:37.863462] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.337 [2024-11-19 02:59:37.866486] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.337 passed 00:18:27.337 Test: admin_get_log_page_with_lpo ...[2024-11-19 02:59:37.951321] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.595 [2024-11-19 02:59:38.018734] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:27.595 [2024-11-19 02:59:38.031785] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.595 passed 00:18:27.595 Test: fabric_property_get ...[2024-11-19 02:59:38.115310] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.595 [2024-11-19 02:59:38.116588] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:27.595 [2024-11-19 02:59:38.118335] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.595 passed 00:18:27.595 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 02:59:38.203900] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.595 [2024-11-19 02:59:38.205241] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:27.595 [2024-11-19 02:59:38.206929] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.853 passed 00:18:27.853 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 02:59:38.289792] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.853 [2024-11-19 02:59:38.374716] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:27.853 [2024-11-19 02:59:38.390716] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:27.853 [2024-11-19 02:59:38.395816] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.853 passed 00:18:28.111 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 02:59:38.481316] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.111 [2024-11-19 02:59:38.482637] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:28.111 [2024-11-19 02:59:38.484340] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.111 passed 00:18:28.111 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 02:59:38.565495] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.111 [2024-11-19 02:59:38.641729] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:28.111 [2024-11-19 02:59:38.665717] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:28.111 [2024-11-19 02:59:38.670809] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.111 passed 00:18:28.369 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 02:59:38.757297] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.369 [2024-11-19 02:59:38.758619] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:28.369 [2024-11-19 02:59:38.758673] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:28.369 [2024-11-19 02:59:38.760318] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.369 passed 00:18:28.369 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 02:59:38.843508] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.369 [2024-11-19 02:59:38.934704] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:28.369 [2024-11-19 02:59:38.942716] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:28.369 [2024-11-19 02:59:38.950714] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:28.369 [2024-11-19 02:59:38.958714] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:28.627 [2024-11-19 02:59:38.987849] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.627 passed 00:18:28.627 Test: admin_create_io_sq_verify_pc ...[2024-11-19 02:59:39.071310] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.627 [2024-11-19 02:59:39.087712] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:28.627 [2024-11-19 02:59:39.105725] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.627 passed 00:18:28.627 Test: admin_create_io_qp_max_qps ...[2024-11-19 02:59:39.191283] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.001 [2024-11-19 02:59:40.301710] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:30.260 [2024-11-19 02:59:40.700553] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:30.260 passed 00:18:30.260 Test: admin_create_io_sq_shared_cq ...[2024-11-19 02:59:40.783468] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.518 [2024-11-19 02:59:40.913696] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:30.518 [2024-11-19 02:59:40.950798] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:30.518 passed 00:18:30.518 00:18:30.518 Run Summary: Type Total Ran Passed Failed Inactive 00:18:30.518 suites 1 1 n/a 0 0 00:18:30.518 tests 18 18 18 0 0 00:18:30.518 asserts 
360 360 360 0 n/a 00:18:30.518 00:18:30.518 Elapsed time = 1.583 seconds 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 234492 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 234492 ']' 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 234492 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234492 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234492' 00:18:30.518 killing process with pid 234492 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 234492 00:18:30.518 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 234492 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:30.777 00:18:30.777 real 0m5.842s 00:18:30.777 user 0m16.440s 00:18:30.777 sys 0m0.577s 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:30.777 ************************************ 00:18:30.777 END TEST nvmf_vfio_user_nvme_compliance 00:18:30.777 ************************************ 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:30.777 ************************************ 00:18:30.777 START TEST nvmf_vfio_user_fuzz 00:18:30.777 ************************************ 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:30.777 * Looking for test storage... 
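Every suite in this log is driven through run_test, which produces the START/END banners and the real/user/sys timing lines seen above and below. A rough sketch of that wrapper, inferred only from the banners in the log (illustrative; the real helper lives in common/autotest_common.sh and is not reproduced here):

# Inferred shape of the run_test wrapper; names and details are assumptions.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  local rc=0
  time "$@" || rc=$?   # e.g. run_test nvmf_vfio_user_fuzz .../vfio_user_fuzz.sh --transport=tcp
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}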
00:18:30.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:30.777 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.037 --rc genhtml_branch_coverage=1 00:18:31.037 --rc genhtml_function_coverage=1 00:18:31.037 --rc genhtml_legend=1 00:18:31.037 --rc geninfo_all_blocks=1 00:18:31.037 --rc geninfo_unexecuted_blocks=1 00:18:31.037 00:18:31.037 ' 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.037 --rc genhtml_branch_coverage=1 00:18:31.037 --rc genhtml_function_coverage=1 00:18:31.037 --rc genhtml_legend=1 00:18:31.037 --rc geninfo_all_blocks=1 00:18:31.037 --rc geninfo_unexecuted_blocks=1 00:18:31.037 00:18:31.037 ' 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.037 --rc genhtml_branch_coverage=1 00:18:31.037 --rc genhtml_function_coverage=1 00:18:31.037 --rc genhtml_legend=1 00:18:31.037 --rc geninfo_all_blocks=1 00:18:31.037 --rc geninfo_unexecuted_blocks=1 00:18:31.037 00:18:31.037 ' 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.037 --rc genhtml_branch_coverage=1 00:18:31.037 --rc genhtml_function_coverage=1 00:18:31.037 --rc genhtml_legend=1 00:18:31.037 --rc geninfo_all_blocks=1 00:18:31.037 --rc geninfo_unexecuted_blocks=1 00:18:31.037 00:18:31.037 ' 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.037 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:31.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=235217 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 235217' 00:18:31.038 Process pid: 235217 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 235217 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 235217 ']' 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.038 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.297 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.297 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:31.297 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.233 malloc0 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
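At this point the fuzz script has fully provisioned the vfio-user target over RPC and built the transport ID it will hand to the fuzzer. Collected into one place, the sequence traced above looks roughly like the sketch below; the rpc_cmd wrapper forwarding to scripts/rpc.py on the default socket is an assumption, while the individual commands and flags are exactly those in the trace:

# vfio-user target provisioning as traced above, followed by the fuzz run.
rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # assumed wrapper

rpc_cmd nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc_cmd bdev_malloc_create 64 512 -b malloc0
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# 30-second randomized fuzz run against the vfio-user endpoint (flags as traced).
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
  -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a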
00:18:32.233 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:04.313 Fuzzing completed. Shutting down the fuzz application 00:19:04.313 00:19:04.313 Dumping successful admin opcodes: 00:19:04.313 8, 9, 10, 24, 00:19:04.313 Dumping successful io opcodes: 00:19:04.313 0, 00:19:04.313 NS: 0x20000081ef00 I/O qp, Total commands completed: 681086, total successful commands: 2651, random_seed: 3819713984 00:19:04.313 NS: 0x20000081ef00 admin qp, Total commands completed: 87112, total successful commands: 696, random_seed: 1773133440 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 235217 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 235217 ']' 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 235217 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235217 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235217' 00:19:04.313 killing process with pid 235217 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 235217 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 235217 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:04.313 00:19:04.313 real 0m32.161s 00:19:04.313 user 0m33.995s 00:19:04.313 sys 0m25.137s 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:04.313 ************************************ 
00:19:04.313 END TEST nvmf_vfio_user_fuzz 00:19:04.313 ************************************ 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:04.313 ************************************ 00:19:04.313 START TEST nvmf_auth_target 00:19:04.313 ************************************ 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:04.313 * Looking for test storage... 00:19:04.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.313 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.314 --rc genhtml_branch_coverage=1 00:19:04.314 --rc genhtml_function_coverage=1 00:19:04.314 --rc genhtml_legend=1 00:19:04.314 --rc geninfo_all_blocks=1 00:19:04.314 --rc geninfo_unexecuted_blocks=1 00:19:04.314 00:19:04.314 ' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.314 --rc genhtml_branch_coverage=1 00:19:04.314 --rc genhtml_function_coverage=1 00:19:04.314 --rc genhtml_legend=1 00:19:04.314 --rc geninfo_all_blocks=1 00:19:04.314 --rc geninfo_unexecuted_blocks=1 00:19:04.314 00:19:04.314 ' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.314 --rc genhtml_branch_coverage=1 00:19:04.314 --rc genhtml_function_coverage=1 00:19:04.314 --rc genhtml_legend=1 00:19:04.314 --rc geninfo_all_blocks=1 00:19:04.314 --rc geninfo_unexecuted_blocks=1 00:19:04.314 00:19:04.314 ' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.314 --rc genhtml_branch_coverage=1 00:19:04.314 --rc genhtml_function_coverage=1 00:19:04.314 --rc genhtml_legend=1 00:19:04.314 --rc geninfo_all_blocks=1 00:19:04.314 --rc geninfo_unexecuted_blocks=1 00:19:04.314 00:19:04.314 ' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.314 03:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:04.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:04.314 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.254 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.254 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:05.254 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:05.254 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:05.254 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:05.255 
03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:05.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.255 03:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:05.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:05.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:05.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.255 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:05.515 03:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:05.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:19:05.515 00:19:05.515 --- 10.0.0.2 ping statistics --- 00:19:05.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.515 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:19:05.515 00:19:05.515 --- 10.0.0.1 ping statistics --- 00:19:05.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.515 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=241275 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 241275 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241275 ']' 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
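The nvmf_tcp_init trace above moves one of the two ice ports into a private network namespace, so the SPDK target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator side (10.0.0.1, in the root namespace) talk over a real link. A minimal bash sketch of that setup, condensed from the commands logged above (the cvl_0_0/cvl_0_1 interface names are specific to this host):

    # target interface lives in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, then sanity-check connectivity in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), which is the process whose pid (nvmfpid=241275) the script waits for.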
00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.515 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=241301 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1dd36761495e601d55f7f92a1523b1b063693a892f0f2ef5 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gUO 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1dd36761495e601d55f7f92a1523b1b063693a892f0f2ef5 0 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1dd36761495e601d55f7f92a1523b1b063693a892f0f2ef5 0 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1dd36761495e601d55f7f92a1523b1b063693a892f0f2ef5 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gUO 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gUO 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.gUO 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ca9a291988b323b5bd232b945baed4a529a9c910a7e0774f0018a32e26f5db3 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Mnb 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ca9a291988b323b5bd232b945baed4a529a9c910a7e0774f0018a32e26f5db3 3 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9ca9a291988b323b5bd232b945baed4a529a9c910a7e0774f0018a32e26f5db3 3 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ca9a291988b323b5bd232b945baed4a529a9c910a7e0774f0018a32e26f5db3 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Mnb 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Mnb 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Mnb 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=80359cbbb1b75474ede9840b7f9b293d 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4dD 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 80359cbbb1b75474ede9840b7f9b293d 1 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 80359cbbb1b75474ede9840b7f9b293d 1 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=80359cbbb1b75474ede9840b7f9b293d 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4dD 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4dD 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.4dD 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fc5d4e5f64e300f4c0ceb556b0cb17cf8c30d7aa0dc8adf5 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4nD 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fc5d4e5f64e300f4c0ceb556b0cb17cf8c30d7aa0dc8adf5 2 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fc5d4e5f64e300f4c0ceb556b0cb17cf8c30d7aa0dc8adf5 2 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.775 03:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fc5d4e5f64e300f4c0ceb556b0cb17cf8c30d7aa0dc8adf5 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:05.775 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4nD 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4nD 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4nD 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ca3bd2c615146fe512b47f5b131419f331a711c6335b447f 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ldh 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ca3bd2c615146fe512b47f5b131419f331a711c6335b447f 2 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ca3bd2c615146fe512b47f5b131419f331a711c6335b447f 2 00:19:06.035 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ca3bd2c615146fe512b47f5b131419f331a711c6335b447f 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ldh 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ldh 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Ldh 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e45766a5fb46dbbb33081e14da635e3d 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6cm 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e45766a5fb46dbbb33081e14da635e3d 1 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e45766a5fb46dbbb33081e14da635e3d 1 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e45766a5fb46dbbb33081e14da635e3d 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6cm 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6cm 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.6cm 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3fc4af3f3d1fd73f2fce2a7a3395ee50c67de75084c42a358d7675797ac0e79 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.puC 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key f3fc4af3f3d1fd73f2fce2a7a3395ee50c67de75084c42a358d7675797ac0e79 3 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f3fc4af3f3d1fd73f2fce2a7a3395ee50c67de75084c42a358d7675797ac0e79 3 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3fc4af3f3d1fd73f2fce2a7a3395ee50c67de75084c42a358d7675797ac0e79 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.puC 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.puC 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.puC 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 241275 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241275 ']' 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.036 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 241301 /var/tmp/host.sock 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241301 ']' 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:06.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
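The gen_dhchap_key calls above draw the raw secret from /dev/urandom with xxd and hand it to format_dhchap_key, which wraps it in a DH-HMAC-CHAP secret string. A minimal bash sketch of that flow, assuming the usual "DHHC-1:<hash id>:<base64 payload>:" layout (hash id 0 for an unhashed key, 1/2/3 for SHA-256/384/512); gen_key_sketch is a hypothetical stand-in for the real gen_dhchap_key helper in nvmf/common.sh:

    gen_key_sketch() {
      local digest_id=$1 len=$2                # e.g. "0 48" or "3 64", as in the trace above
      local hex file
      hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 48 hex chars come from 24 random bytes
      file=$(mktemp -t spdk.key-XXX)
      # format_dhchap_key emits "DHHC-1:0<digest_id>:<base64 payload>:"; the payload
      # encoding (the key text plus an appended CRC-32) is produced by the inline
      # "python -" step seen in the log. A placeholder is written here instead.
      printf 'DHHC-1:0%s:<base64 payload>:\n' "$digest_id" > "$file"
      chmod 0600 "$file"
      echo "$file"
    }

The resulting files (/tmp/spdk.key-null.gUO, /tmp/spdk.key-sha512.Mnb, and so on) populate the keys[] and ckeys[] arrays; the DHHC-1:00:MWRk...: secret passed to nvme connect further down corresponds to the base64 form of the 48-character hex secret generated for key0 above.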
00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.294 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gUO 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gUO 00:19:06.552 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gUO 00:19:06.811 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Mnb ]] 00:19:06.811 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Mnb 00:19:06.811 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.811 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.811 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.811 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Mnb 00:19:06.811 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Mnb 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4dD 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.378 03:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4dD 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4dD 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.4nD ]] 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4nD 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4nD 00:19:07.378 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4nD 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ldh 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Ldh 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Ldh 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.6cm ]] 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6cm 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6cm 00:19:07.945 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6cm 00:19:08.203 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:08.203 03:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.puC 00:19:08.203 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.203 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.461 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.462 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.puC 00:19:08.462 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.puC 00:19:08.720 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:08.720 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:08.720 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.720 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.720 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.720 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.978 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.978 
03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.237 00:19:09.237 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.237 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.237 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.495 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.495 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.495 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.495 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.495 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.495 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.495 { 00:19:09.495 "cntlid": 1, 00:19:09.495 "qid": 0, 00:19:09.495 "state": "enabled", 00:19:09.495 "thread": "nvmf_tgt_poll_group_000", 00:19:09.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.495 "listen_address": { 00:19:09.495 "trtype": "TCP", 00:19:09.495 "adrfam": "IPv4", 00:19:09.495 "traddr": "10.0.0.2", 00:19:09.495 "trsvcid": "4420" 00:19:09.495 }, 00:19:09.495 "peer_address": { 00:19:09.495 "trtype": "TCP", 00:19:09.495 "adrfam": "IPv4", 00:19:09.495 "traddr": "10.0.0.1", 00:19:09.495 "trsvcid": "43070" 00:19:09.495 }, 00:19:09.495 "auth": { 00:19:09.495 "state": "completed", 00:19:09.495 "digest": "sha256", 00:19:09.495 "dhgroup": "null" 00:19:09.495 } 00:19:09.495 } 00:19:09.495 ]' 00:19:09.495 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.495 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.495 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.495 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.495 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.495 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.495 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.495 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.062 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:10.062 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.329 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.329 03:00:25 
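The same key slot is then exercised through the kernel initiator: nvme-cli connects with the secrets passed inline in DHHC-1 form, disconnects, and the host entry is removed from the subsystem before the next key slot is configured. A sketch with the secrets shortened to placeholders (the complete strings appear in the trace above):

# Kernel-initiator leg of the same iteration. Secrets are truncated here;
# the full DHHC-1 strings are printed in the trace.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55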
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.329 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.329 { 00:19:15.329 "cntlid": 3, 00:19:15.329 "qid": 0, 00:19:15.329 "state": "enabled", 00:19:15.329 "thread": "nvmf_tgt_poll_group_000", 00:19:15.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.329 "listen_address": { 00:19:15.329 "trtype": "TCP", 00:19:15.329 "adrfam": "IPv4", 00:19:15.329 "traddr": "10.0.0.2", 00:19:15.329 "trsvcid": "4420" 00:19:15.329 }, 00:19:15.329 "peer_address": { 00:19:15.329 "trtype": "TCP", 00:19:15.329 "adrfam": "IPv4", 00:19:15.329 "traddr": "10.0.0.1", 00:19:15.329 "trsvcid": "43098" 00:19:15.329 }, 00:19:15.329 "auth": { 00:19:15.329 "state": "completed", 00:19:15.329 "digest": "sha256", 00:19:15.329 "dhgroup": "null" 00:19:15.329 } 00:19:15.329 } 00:19:15.329 ]' 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.329 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.588 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:15.588 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.522 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.780 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:16.780 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.781 03:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.781 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.347 00:19:17.347 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.347 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.347 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.347 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.347 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.347 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.347 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.606 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.606 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.606 { 00:19:17.606 "cntlid": 5, 00:19:17.606 "qid": 0, 00:19:17.606 "state": "enabled", 00:19:17.606 "thread": "nvmf_tgt_poll_group_000", 00:19:17.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.606 "listen_address": { 00:19:17.606 "trtype": "TCP", 00:19:17.606 "adrfam": "IPv4", 00:19:17.606 "traddr": "10.0.0.2", 00:19:17.606 "trsvcid": "4420" 00:19:17.606 }, 00:19:17.606 "peer_address": { 00:19:17.606 "trtype": "TCP", 00:19:17.606 "adrfam": "IPv4", 00:19:17.606 "traddr": "10.0.0.1", 00:19:17.606 "trsvcid": "60462" 00:19:17.606 }, 00:19:17.606 "auth": { 00:19:17.606 "state": "completed", 00:19:17.606 "digest": "sha256", 00:19:17.606 "dhgroup": "null" 00:19:17.606 } 00:19:17.606 } 00:19:17.606 ]' 00:19:17.606 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.606 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.606 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.606 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:17.606 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.606 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.606 03:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.606 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.864 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:17.864 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.798 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.055 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.314 00:19:19.314 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.314 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.314 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.572 { 00:19:19.572 "cntlid": 7, 00:19:19.572 "qid": 0, 00:19:19.572 "state": "enabled", 00:19:19.572 "thread": "nvmf_tgt_poll_group_000", 00:19:19.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.572 "listen_address": { 00:19:19.572 "trtype": "TCP", 00:19:19.572 "adrfam": "IPv4", 00:19:19.572 "traddr": "10.0.0.2", 00:19:19.572 "trsvcid": "4420" 00:19:19.572 }, 00:19:19.572 "peer_address": { 00:19:19.572 "trtype": "TCP", 00:19:19.572 "adrfam": "IPv4", 00:19:19.572 "traddr": "10.0.0.1", 00:19:19.572 "trsvcid": "60486" 00:19:19.572 }, 00:19:19.572 "auth": { 00:19:19.572 "state": "completed", 00:19:19.572 "digest": "sha256", 00:19:19.572 "dhgroup": "null" 00:19:19.572 } 00:19:19.572 } 00:19:19.572 ]' 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.572 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.830 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:19.830 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.830 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.830 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.830 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.089 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:20.089 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.024 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
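From this point the dhgroup loop repeats the same key0 through key3 cycle with ffdhe2048 instead of the null group. The only call that changes is the host-side option shown below; the qpair check then correspondingly expects "dhgroup": "ffdhe2048".

# Only difference in the ffdhe2048 pass: the host is pinned to an FFDHE-2048
# exchange on top of the sha256 digest; add_host/attach/verify/detach are unchanged.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048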
common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.284 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.542 00:19:21.542 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.542 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.542 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.800 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.800 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.800 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.800 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.800 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.800 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.800 { 00:19:21.800 "cntlid": 9, 00:19:21.800 "qid": 0, 00:19:21.800 "state": "enabled", 00:19:21.800 "thread": "nvmf_tgt_poll_group_000", 00:19:21.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.800 "listen_address": { 00:19:21.800 "trtype": "TCP", 00:19:21.800 "adrfam": "IPv4", 00:19:21.800 "traddr": "10.0.0.2", 00:19:21.800 "trsvcid": "4420" 00:19:21.800 }, 00:19:21.800 "peer_address": { 00:19:21.800 "trtype": "TCP", 00:19:21.800 "adrfam": "IPv4", 00:19:21.800 "traddr": "10.0.0.1", 00:19:21.800 "trsvcid": "60518" 00:19:21.800 }, 00:19:21.800 "auth": { 00:19:21.800 "state": "completed", 00:19:21.800 "digest": "sha256", 00:19:21.800 "dhgroup": "ffdhe2048" 00:19:21.800 } 00:19:21.800 } 00:19:21.800 ]' 00:19:21.800 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.058 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.058 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.058 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:22.058 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.058 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.058 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.058 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.316 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:22.316 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:23.250 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:23.508 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:23.508 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.508 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.508 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:23.508 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:23.508 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.508 03:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.508 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.508 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.508 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.508 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.508 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.508 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.766 00:19:23.766 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.766 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.766 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.024 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.024 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.024 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.024 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.024 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.024 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.024 { 00:19:24.024 "cntlid": 11, 00:19:24.024 "qid": 0, 00:19:24.024 "state": "enabled", 00:19:24.024 "thread": "nvmf_tgt_poll_group_000", 00:19:24.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:24.024 "listen_address": { 00:19:24.024 "trtype": "TCP", 00:19:24.024 "adrfam": "IPv4", 00:19:24.024 "traddr": "10.0.0.2", 00:19:24.024 "trsvcid": "4420" 00:19:24.024 }, 00:19:24.024 "peer_address": { 00:19:24.024 "trtype": "TCP", 00:19:24.024 "adrfam": "IPv4", 00:19:24.024 "traddr": "10.0.0.1", 00:19:24.024 "trsvcid": "60538" 00:19:24.024 }, 00:19:24.024 "auth": { 00:19:24.024 "state": "completed", 00:19:24.024 "digest": "sha256", 00:19:24.024 "dhgroup": "ffdhe2048" 00:19:24.024 } 00:19:24.024 } 00:19:24.024 ]' 00:19:24.024 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.282 03:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.282 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.282 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.282 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.282 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.282 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.282 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.540 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:24.540 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.474 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:25.733 03:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.733 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.992 00:19:25.992 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.992 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.992 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.251 { 00:19:26.251 "cntlid": 13, 00:19:26.251 "qid": 0, 00:19:26.251 "state": "enabled", 00:19:26.251 "thread": "nvmf_tgt_poll_group_000", 00:19:26.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.251 "listen_address": { 00:19:26.251 "trtype": "TCP", 00:19:26.251 "adrfam": "IPv4", 00:19:26.251 "traddr": "10.0.0.2", 00:19:26.251 "trsvcid": "4420" 00:19:26.251 }, 00:19:26.251 "peer_address": { 00:19:26.251 "trtype": "TCP", 00:19:26.251 "adrfam": "IPv4", 00:19:26.251 "traddr": "10.0.0.1", 00:19:26.251 "trsvcid": "60560" 00:19:26.251 }, 00:19:26.251 "auth": { 00:19:26.251 "state": "completed", 00:19:26.251 "digest": 
"sha256", 00:19:26.251 "dhgroup": "ffdhe2048" 00:19:26.251 } 00:19:26.251 } 00:19:26.251 ]' 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.251 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.510 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.510 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.510 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.510 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.510 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.769 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:26.769 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.705 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.963 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:27.963 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.963 03:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.964 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.222 00:19:28.222 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.222 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.222 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.480 { 00:19:28.480 "cntlid": 15, 00:19:28.480 "qid": 0, 00:19:28.480 "state": "enabled", 00:19:28.480 "thread": "nvmf_tgt_poll_group_000", 00:19:28.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.480 "listen_address": { 00:19:28.480 "trtype": "TCP", 00:19:28.480 "adrfam": "IPv4", 00:19:28.480 "traddr": "10.0.0.2", 00:19:28.480 "trsvcid": "4420" 00:19:28.480 }, 00:19:28.480 "peer_address": { 00:19:28.480 "trtype": "TCP", 00:19:28.480 "adrfam": "IPv4", 00:19:28.480 "traddr": "10.0.0.1", 00:19:28.480 
"trsvcid": "34564" 00:19:28.480 }, 00:19:28.480 "auth": { 00:19:28.480 "state": "completed", 00:19:28.480 "digest": "sha256", 00:19:28.480 "dhgroup": "ffdhe2048" 00:19:28.480 } 00:19:28.480 } 00:19:28.480 ]' 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.480 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.739 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.739 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.739 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.739 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.997 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:28.997 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.932 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:30.191 03:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.191 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.450 00:19:30.450 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.450 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.450 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.709 { 00:19:30.709 "cntlid": 17, 00:19:30.709 "qid": 0, 00:19:30.709 "state": "enabled", 00:19:30.709 "thread": "nvmf_tgt_poll_group_000", 00:19:30.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.709 "listen_address": { 00:19:30.709 "trtype": "TCP", 00:19:30.709 "adrfam": "IPv4", 
00:19:30.709 "traddr": "10.0.0.2", 00:19:30.709 "trsvcid": "4420" 00:19:30.709 }, 00:19:30.709 "peer_address": { 00:19:30.709 "trtype": "TCP", 00:19:30.709 "adrfam": "IPv4", 00:19:30.709 "traddr": "10.0.0.1", 00:19:30.709 "trsvcid": "34588" 00:19:30.709 }, 00:19:30.709 "auth": { 00:19:30.709 "state": "completed", 00:19:30.709 "digest": "sha256", 00:19:30.709 "dhgroup": "ffdhe3072" 00:19:30.709 } 00:19:30.709 } 00:19:30.709 ]' 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.709 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.277 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:31.277 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:31.844 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.102 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.102 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.102 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.102 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.102 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.102 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.102 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.361 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.620 00:19:32.620 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.620 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.620 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.878 { 
00:19:32.878 "cntlid": 19, 00:19:32.878 "qid": 0, 00:19:32.878 "state": "enabled", 00:19:32.878 "thread": "nvmf_tgt_poll_group_000", 00:19:32.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.878 "listen_address": { 00:19:32.878 "trtype": "TCP", 00:19:32.878 "adrfam": "IPv4", 00:19:32.878 "traddr": "10.0.0.2", 00:19:32.878 "trsvcid": "4420" 00:19:32.878 }, 00:19:32.878 "peer_address": { 00:19:32.878 "trtype": "TCP", 00:19:32.878 "adrfam": "IPv4", 00:19:32.878 "traddr": "10.0.0.1", 00:19:32.878 "trsvcid": "34614" 00:19:32.878 }, 00:19:32.878 "auth": { 00:19:32.878 "state": "completed", 00:19:32.878 "digest": "sha256", 00:19:32.878 "dhgroup": "ffdhe3072" 00:19:32.878 } 00:19:32.878 } 00:19:32.878 ]' 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:32.878 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.137 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.137 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.137 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.395 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:33.395 03:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.331 03:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.589 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.847 00:19:34.847 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.847 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.847 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.105 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.105 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.105 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.105 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.105 03:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.105 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.105 { 00:19:35.105 "cntlid": 21, 00:19:35.105 "qid": 0, 00:19:35.105 "state": "enabled", 00:19:35.105 "thread": "nvmf_tgt_poll_group_000", 00:19:35.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.105 "listen_address": { 00:19:35.105 "trtype": "TCP", 00:19:35.105 "adrfam": "IPv4", 00:19:35.105 "traddr": "10.0.0.2", 00:19:35.105 "trsvcid": "4420" 00:19:35.105 }, 00:19:35.105 "peer_address": { 00:19:35.105 "trtype": "TCP", 00:19:35.105 "adrfam": "IPv4", 00:19:35.105 "traddr": "10.0.0.1", 00:19:35.105 "trsvcid": "34640" 00:19:35.105 }, 00:19:35.105 "auth": { 00:19:35.105 "state": "completed", 00:19:35.105 "digest": "sha256", 00:19:35.105 "dhgroup": "ffdhe3072" 00:19:35.105 } 00:19:35.105 } 00:19:35.105 ]' 00:19:35.105 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.364 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.364 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.364 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.364 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.364 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.364 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.364 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.624 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:35.624 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.558 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.817 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.075 00:19:37.075 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.075 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.075 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.334 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.334 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.334 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.334 03:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.334 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.334 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.334 { 00:19:37.334 "cntlid": 23, 00:19:37.334 "qid": 0, 00:19:37.334 "state": "enabled", 00:19:37.334 "thread": "nvmf_tgt_poll_group_000", 00:19:37.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.334 "listen_address": { 00:19:37.334 "trtype": "TCP", 00:19:37.334 "adrfam": "IPv4", 00:19:37.334 "traddr": "10.0.0.2", 00:19:37.334 "trsvcid": "4420" 00:19:37.334 }, 00:19:37.334 "peer_address": { 00:19:37.334 "trtype": "TCP", 00:19:37.334 "adrfam": "IPv4", 00:19:37.334 "traddr": "10.0.0.1", 00:19:37.334 "trsvcid": "43556" 00:19:37.334 }, 00:19:37.334 "auth": { 00:19:37.334 "state": "completed", 00:19:37.334 "digest": "sha256", 00:19:37.334 "dhgroup": "ffdhe3072" 00:19:37.334 } 00:19:37.334 } 00:19:37.334 ]' 00:19:37.334 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.592 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.592 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.592 03:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.592 03:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.592 03:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.592 03:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.592 03:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.851 03:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:37.851 03:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.785 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.043 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.301 00:19:39.301 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.301 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.301 03:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.867 { 00:19:39.867 "cntlid": 25, 00:19:39.867 "qid": 0, 00:19:39.867 "state": "enabled", 00:19:39.867 "thread": "nvmf_tgt_poll_group_000", 00:19:39.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.867 "listen_address": { 00:19:39.867 "trtype": "TCP", 00:19:39.867 "adrfam": "IPv4", 00:19:39.867 "traddr": "10.0.0.2", 00:19:39.867 "trsvcid": "4420" 00:19:39.867 }, 00:19:39.867 "peer_address": { 00:19:39.867 "trtype": "TCP", 00:19:39.867 "adrfam": "IPv4", 00:19:39.867 "traddr": "10.0.0.1", 00:19:39.867 "trsvcid": "43570" 00:19:39.867 }, 00:19:39.867 "auth": { 00:19:39.867 "state": "completed", 00:19:39.867 "digest": "sha256", 00:19:39.867 "dhgroup": "ffdhe4096" 00:19:39.867 } 00:19:39.867 } 00:19:39.867 ]' 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.867 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.126 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:40.126 03:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.061 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.320 03:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.887 00:19:41.887 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.887 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.887 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.173 { 00:19:42.173 "cntlid": 27, 00:19:42.173 "qid": 0, 00:19:42.173 "state": "enabled", 00:19:42.173 "thread": "nvmf_tgt_poll_group_000", 00:19:42.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:42.173 "listen_address": { 00:19:42.173 "trtype": "TCP", 00:19:42.173 "adrfam": "IPv4", 00:19:42.173 "traddr": "10.0.0.2", 00:19:42.173 "trsvcid": "4420" 00:19:42.173 }, 00:19:42.173 "peer_address": { 00:19:42.173 "trtype": "TCP", 00:19:42.173 "adrfam": "IPv4", 00:19:42.173 "traddr": "10.0.0.1", 00:19:42.173 "trsvcid": "43598" 00:19:42.173 }, 00:19:42.173 "auth": { 00:19:42.173 "state": "completed", 00:19:42.173 "digest": "sha256", 00:19:42.173 "dhgroup": "ffdhe4096" 00:19:42.173 } 00:19:42.173 } 00:19:42.173 ]' 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.173 03:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.432 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:42.432 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:43.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.364 03:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.623 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.880 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.880 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.880 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.880 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.137 00:19:44.137 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:19:44.137 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.138 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.395 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.395 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.395 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.395 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.395 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.395 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.395 { 00:19:44.395 "cntlid": 29, 00:19:44.395 "qid": 0, 00:19:44.395 "state": "enabled", 00:19:44.396 "thread": "nvmf_tgt_poll_group_000", 00:19:44.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.396 "listen_address": { 00:19:44.396 "trtype": "TCP", 00:19:44.396 "adrfam": "IPv4", 00:19:44.396 "traddr": "10.0.0.2", 00:19:44.396 "trsvcid": "4420" 00:19:44.396 }, 00:19:44.396 "peer_address": { 00:19:44.396 "trtype": "TCP", 00:19:44.396 "adrfam": "IPv4", 00:19:44.396 "traddr": "10.0.0.1", 00:19:44.396 "trsvcid": "43624" 00:19:44.396 }, 00:19:44.396 "auth": { 00:19:44.396 "state": "completed", 00:19:44.396 "digest": "sha256", 00:19:44.396 "dhgroup": "ffdhe4096" 00:19:44.396 } 00:19:44.396 } 00:19:44.396 ]' 00:19:44.396 03:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.654 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.654 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.654 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.654 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.654 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.654 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.654 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.912 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:44.912 03:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: 
--dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.847 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.105 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.364 00:19:46.364 03:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.364 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.364 03:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.622 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.622 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.622 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.622 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.880 { 00:19:46.880 "cntlid": 31, 00:19:46.880 "qid": 0, 00:19:46.880 "state": "enabled", 00:19:46.880 "thread": "nvmf_tgt_poll_group_000", 00:19:46.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.880 "listen_address": { 00:19:46.880 "trtype": "TCP", 00:19:46.880 "adrfam": "IPv4", 00:19:46.880 "traddr": "10.0.0.2", 00:19:46.880 "trsvcid": "4420" 00:19:46.880 }, 00:19:46.880 "peer_address": { 00:19:46.880 "trtype": "TCP", 00:19:46.880 "adrfam": "IPv4", 00:19:46.880 "traddr": "10.0.0.1", 00:19:46.880 "trsvcid": "43640" 00:19:46.880 }, 00:19:46.880 "auth": { 00:19:46.880 "state": "completed", 00:19:46.880 "digest": "sha256", 00:19:46.880 "dhgroup": "ffdhe4096" 00:19:46.880 } 00:19:46.880 } 00:19:46.880 ]' 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.880 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.139 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:47.139 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.072 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.330 03:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.897 00:19:48.897 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.897 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.897 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.156 { 00:19:49.156 "cntlid": 33, 00:19:49.156 "qid": 0, 00:19:49.156 "state": "enabled", 00:19:49.156 "thread": "nvmf_tgt_poll_group_000", 00:19:49.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.156 "listen_address": { 00:19:49.156 "trtype": "TCP", 00:19:49.156 "adrfam": "IPv4", 00:19:49.156 "traddr": "10.0.0.2", 00:19:49.156 "trsvcid": "4420" 00:19:49.156 }, 00:19:49.156 "peer_address": { 00:19:49.156 "trtype": "TCP", 00:19:49.156 "adrfam": "IPv4", 00:19:49.156 "traddr": "10.0.0.1", 00:19:49.156 "trsvcid": "40338" 00:19:49.156 }, 00:19:49.156 "auth": { 00:19:49.156 "state": "completed", 00:19:49.156 "digest": "sha256", 00:19:49.156 "dhgroup": "ffdhe6144" 00:19:49.156 } 00:19:49.156 } 00:19:49.156 ]' 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.156 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.416 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret 
DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:49.416 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:50.351 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.610 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.178 00:19:51.178 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.178 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.178 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.744 { 00:19:51.744 "cntlid": 35, 00:19:51.744 "qid": 0, 00:19:51.744 "state": "enabled", 00:19:51.744 "thread": "nvmf_tgt_poll_group_000", 00:19:51.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.744 "listen_address": { 00:19:51.744 "trtype": "TCP", 00:19:51.744 "adrfam": "IPv4", 00:19:51.744 "traddr": "10.0.0.2", 00:19:51.744 "trsvcid": "4420" 00:19:51.744 }, 00:19:51.744 "peer_address": { 00:19:51.744 "trtype": "TCP", 00:19:51.744 "adrfam": "IPv4", 00:19:51.744 "traddr": "10.0.0.1", 00:19:51.744 "trsvcid": "40368" 00:19:51.744 }, 00:19:51.744 "auth": { 00:19:51.744 "state": "completed", 00:19:51.744 "digest": "sha256", 00:19:51.744 "dhgroup": "ffdhe6144" 00:19:51.744 } 00:19:51.744 } 00:19:51.744 ]' 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.744 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.004 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:52.004 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:19:52.935 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.936 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.936 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.936 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.936 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.936 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.936 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:52.936 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.194 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.760 00:19:53.760 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.760 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.760 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.019 { 00:19:54.019 "cntlid": 37, 00:19:54.019 "qid": 0, 00:19:54.019 "state": "enabled", 00:19:54.019 "thread": "nvmf_tgt_poll_group_000", 00:19:54.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.019 "listen_address": { 00:19:54.019 "trtype": "TCP", 00:19:54.019 "adrfam": "IPv4", 00:19:54.019 "traddr": "10.0.0.2", 00:19:54.019 "trsvcid": "4420" 00:19:54.019 }, 00:19:54.019 "peer_address": { 00:19:54.019 "trtype": "TCP", 00:19:54.019 "adrfam": "IPv4", 00:19:54.019 "traddr": "10.0.0.1", 00:19:54.019 "trsvcid": "40392" 00:19:54.019 }, 00:19:54.019 "auth": { 00:19:54.019 "state": "completed", 00:19:54.019 "digest": "sha256", 00:19:54.019 "dhgroup": "ffdhe6144" 00:19:54.019 } 00:19:54.019 } 00:19:54.019 ]' 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.019 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.278 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.278 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:54.278 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.536 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:54.536 03:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.472 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.730 03:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.730 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.297 00:19:56.297 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.297 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.297 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.555 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.555 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.555 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.555 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.555 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.555 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.555 { 00:19:56.555 "cntlid": 39, 00:19:56.555 "qid": 0, 00:19:56.555 "state": "enabled", 00:19:56.555 "thread": "nvmf_tgt_poll_group_000", 00:19:56.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.555 "listen_address": { 00:19:56.555 "trtype": "TCP", 00:19:56.555 "adrfam": "IPv4", 00:19:56.555 "traddr": "10.0.0.2", 00:19:56.555 "trsvcid": "4420" 00:19:56.555 }, 00:19:56.555 "peer_address": { 00:19:56.555 "trtype": "TCP", 00:19:56.555 "adrfam": "IPv4", 00:19:56.555 "traddr": "10.0.0.1", 00:19:56.555 "trsvcid": "40428" 00:19:56.555 }, 00:19:56.555 "auth": { 00:19:56.555 "state": "completed", 00:19:56.555 "digest": "sha256", 00:19:56.555 "dhgroup": "ffdhe6144" 00:19:56.555 } 00:19:56.555 } 00:19:56.555 ]' 00:19:56.555 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.555 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.555 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.555 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.555 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.555 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:56.555 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.555 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.814 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:56.814 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.746 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.004 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.936 00:19:58.936 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.936 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.936 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.194 { 00:19:59.194 "cntlid": 41, 00:19:59.194 "qid": 0, 00:19:59.194 "state": "enabled", 00:19:59.194 "thread": "nvmf_tgt_poll_group_000", 00:19:59.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.194 "listen_address": { 00:19:59.194 "trtype": "TCP", 00:19:59.194 "adrfam": "IPv4", 00:19:59.194 "traddr": "10.0.0.2", 00:19:59.194 "trsvcid": "4420" 00:19:59.194 }, 00:19:59.194 "peer_address": { 00:19:59.194 "trtype": "TCP", 00:19:59.194 "adrfam": "IPv4", 00:19:59.194 "traddr": "10.0.0.1", 00:19:59.194 "trsvcid": "43178" 00:19:59.194 }, 00:19:59.194 "auth": { 00:19:59.194 "state": "completed", 00:19:59.194 "digest": "sha256", 00:19:59.194 "dhgroup": "ffdhe8192" 00:19:59.194 } 00:19:59.194 } 00:19:59.194 ]' 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.194 03:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.194 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.195 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.195 03:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.454 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:19:59.454 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:00.389 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.389 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.389 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.389 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.389 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.389 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.389 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.389 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.956 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.523 00:20:01.523 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.523 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.523 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.090 { 00:20:02.090 "cntlid": 43, 00:20:02.090 "qid": 0, 00:20:02.090 "state": "enabled", 00:20:02.090 "thread": "nvmf_tgt_poll_group_000", 00:20:02.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.090 "listen_address": { 00:20:02.090 "trtype": "TCP", 00:20:02.090 "adrfam": "IPv4", 00:20:02.090 "traddr": "10.0.0.2", 00:20:02.090 "trsvcid": "4420" 00:20:02.090 }, 00:20:02.090 "peer_address": { 00:20:02.090 "trtype": "TCP", 00:20:02.090 "adrfam": "IPv4", 00:20:02.090 "traddr": "10.0.0.1", 00:20:02.090 "trsvcid": "43210" 00:20:02.090 }, 00:20:02.090 "auth": { 00:20:02.090 "state": "completed", 00:20:02.090 "digest": "sha256", 00:20:02.090 "dhgroup": "ffdhe8192" 00:20:02.090 } 00:20:02.090 } 00:20:02.090 ]' 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.090 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.348 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:02.348 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.282 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.540 03:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.540 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.474 00:20:04.474 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.474 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.474 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.732 { 00:20:04.732 "cntlid": 45, 00:20:04.732 "qid": 0, 00:20:04.732 "state": "enabled", 00:20:04.732 "thread": "nvmf_tgt_poll_group_000", 00:20:04.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.732 "listen_address": { 00:20:04.732 "trtype": "TCP", 00:20:04.732 "adrfam": "IPv4", 00:20:04.732 "traddr": "10.0.0.2", 00:20:04.732 "trsvcid": "4420" 00:20:04.732 }, 00:20:04.732 "peer_address": { 00:20:04.732 "trtype": "TCP", 00:20:04.732 "adrfam": "IPv4", 00:20:04.732 "traddr": "10.0.0.1", 00:20:04.732 "trsvcid": "43232" 00:20:04.732 }, 00:20:04.732 "auth": { 00:20:04.732 "state": "completed", 00:20:04.732 "digest": "sha256", 00:20:04.732 "dhgroup": "ffdhe8192" 00:20:04.732 } 00:20:04.732 } 00:20:04.732 ]' 00:20:04.732 
03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.732 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.990 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:04.990 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.924 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.182 03:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.182 03:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.117 00:20:07.117 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.117 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.117 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.376 { 00:20:07.376 "cntlid": 47, 00:20:07.376 "qid": 0, 00:20:07.376 "state": "enabled", 00:20:07.376 "thread": "nvmf_tgt_poll_group_000", 00:20:07.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.376 "listen_address": { 00:20:07.376 "trtype": "TCP", 00:20:07.376 "adrfam": "IPv4", 00:20:07.376 "traddr": "10.0.0.2", 00:20:07.376 "trsvcid": "4420" 00:20:07.376 }, 00:20:07.376 "peer_address": { 00:20:07.376 "trtype": "TCP", 00:20:07.376 "adrfam": "IPv4", 00:20:07.376 "traddr": "10.0.0.1", 00:20:07.376 "trsvcid": "43260" 00:20:07.376 }, 00:20:07.376 "auth": { 00:20:07.376 "state": "completed", 00:20:07.376 
"digest": "sha256", 00:20:07.376 "dhgroup": "ffdhe8192" 00:20:07.376 } 00:20:07.376 } 00:20:07.376 ]' 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.376 03:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.635 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:07.635 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.571 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:08.828 03:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.828 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.393 00:20:09.393 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.393 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.393 03:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.393 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.393 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.393 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.393 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.651 { 00:20:09.651 "cntlid": 49, 00:20:09.651 "qid": 0, 00:20:09.651 "state": "enabled", 00:20:09.651 "thread": "nvmf_tgt_poll_group_000", 00:20:09.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.651 "listen_address": { 00:20:09.651 "trtype": "TCP", 00:20:09.651 "adrfam": "IPv4", 
00:20:09.651 "traddr": "10.0.0.2", 00:20:09.651 "trsvcid": "4420" 00:20:09.651 }, 00:20:09.651 "peer_address": { 00:20:09.651 "trtype": "TCP", 00:20:09.651 "adrfam": "IPv4", 00:20:09.651 "traddr": "10.0.0.1", 00:20:09.651 "trsvcid": "39496" 00:20:09.651 }, 00:20:09.651 "auth": { 00:20:09.651 "state": "completed", 00:20:09.651 "digest": "sha384", 00:20:09.651 "dhgroup": "null" 00:20:09.651 } 00:20:09.651 } 00:20:09.651 ]' 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.651 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.910 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:09.910 03:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.844 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.103 03:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.670 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.670 { 00:20:11.670 "cntlid": 51, 00:20:11.670 "qid": 0, 00:20:11.670 "state": "enabled", 
00:20:11.670 "thread": "nvmf_tgt_poll_group_000", 00:20:11.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.670 "listen_address": { 00:20:11.670 "trtype": "TCP", 00:20:11.670 "adrfam": "IPv4", 00:20:11.670 "traddr": "10.0.0.2", 00:20:11.670 "trsvcid": "4420" 00:20:11.670 }, 00:20:11.670 "peer_address": { 00:20:11.670 "trtype": "TCP", 00:20:11.670 "adrfam": "IPv4", 00:20:11.670 "traddr": "10.0.0.1", 00:20:11.670 "trsvcid": "39524" 00:20:11.670 }, 00:20:11.670 "auth": { 00:20:11.670 "state": "completed", 00:20:11.670 "digest": "sha384", 00:20:11.670 "dhgroup": "null" 00:20:11.670 } 00:20:11.670 } 00:20:11.670 ]' 00:20:11.670 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.929 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.929 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.929 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.929 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.929 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.929 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.929 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.188 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:12.188 03:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:13.124 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.382 03:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.949 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.949 03:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.949 { 00:20:13.949 "cntlid": 53, 00:20:13.949 "qid": 0, 00:20:13.949 "state": "enabled", 00:20:13.949 "thread": "nvmf_tgt_poll_group_000", 00:20:13.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.949 "listen_address": { 00:20:13.949 "trtype": "TCP", 00:20:13.949 "adrfam": "IPv4", 00:20:13.949 "traddr": "10.0.0.2", 00:20:13.949 "trsvcid": "4420" 00:20:13.949 }, 00:20:13.949 "peer_address": { 00:20:13.949 "trtype": "TCP", 00:20:13.949 "adrfam": "IPv4", 00:20:13.949 "traddr": "10.0.0.1", 00:20:13.949 "trsvcid": "39554" 00:20:13.949 }, 00:20:13.949 "auth": { 00:20:13.949 "state": "completed", 00:20:13.949 "digest": "sha384", 00:20:13.949 "dhgroup": "null" 00:20:13.949 } 00:20:13.949 } 00:20:13.949 ]' 00:20:13.949 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.208 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.208 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.208 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:14.208 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.208 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.208 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.208 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.466 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:14.466 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.402 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.661 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.919 00:20:15.919 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.919 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.919 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.177 { 00:20:16.177 "cntlid": 55, 00:20:16.177 "qid": 0, 00:20:16.177 "state": "enabled", 00:20:16.177 "thread": "nvmf_tgt_poll_group_000", 00:20:16.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.177 "listen_address": { 00:20:16.177 "trtype": "TCP", 00:20:16.177 "adrfam": "IPv4", 00:20:16.177 "traddr": "10.0.0.2", 00:20:16.177 "trsvcid": "4420" 00:20:16.177 }, 00:20:16.177 "peer_address": { 00:20:16.177 "trtype": "TCP", 00:20:16.177 "adrfam": "IPv4", 00:20:16.177 "traddr": "10.0.0.1", 00:20:16.177 "trsvcid": "39592" 00:20:16.177 }, 00:20:16.177 "auth": { 00:20:16.177 "state": "completed", 00:20:16.177 "digest": "sha384", 00:20:16.177 "dhgroup": "null" 00:20:16.177 } 00:20:16.177 } 00:20:16.177 ]' 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.177 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.435 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.435 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.435 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.435 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.435 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.694 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:16.694 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.629 03:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.629 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.887 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.145 00:20:18.145 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.145 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.145 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.403 { 00:20:18.403 "cntlid": 57, 00:20:18.403 "qid": 0, 00:20:18.403 "state": "enabled", 00:20:18.403 "thread": "nvmf_tgt_poll_group_000", 00:20:18.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.403 "listen_address": { 00:20:18.403 "trtype": "TCP", 00:20:18.403 "adrfam": "IPv4", 00:20:18.403 "traddr": "10.0.0.2", 00:20:18.403 "trsvcid": "4420" 00:20:18.403 }, 00:20:18.403 "peer_address": { 00:20:18.403 "trtype": "TCP", 00:20:18.403 "adrfam": "IPv4", 00:20:18.403 "traddr": "10.0.0.1", 00:20:18.403 "trsvcid": "35088" 00:20:18.403 }, 00:20:18.403 "auth": { 00:20:18.403 "state": "completed", 00:20:18.403 "digest": "sha384", 00:20:18.403 "dhgroup": "ffdhe2048" 00:20:18.403 } 00:20:18.403 } 00:20:18.403 ]' 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.403 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.661 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.661 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.661 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.661 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.661 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.919 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:18.919 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.854 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.112 03:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.370 00:20:20.628 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.628 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.628 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.887 { 00:20:20.887 "cntlid": 59, 00:20:20.887 "qid": 0, 00:20:20.887 "state": "enabled", 00:20:20.887 "thread": "nvmf_tgt_poll_group_000", 00:20:20.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.887 "listen_address": { 00:20:20.887 "trtype": "TCP", 00:20:20.887 "adrfam": "IPv4", 00:20:20.887 "traddr": "10.0.0.2", 00:20:20.887 "trsvcid": "4420" 00:20:20.887 }, 00:20:20.887 "peer_address": { 00:20:20.887 "trtype": "TCP", 00:20:20.887 "adrfam": "IPv4", 00:20:20.887 "traddr": "10.0.0.1", 00:20:20.887 "trsvcid": "35116" 00:20:20.887 }, 00:20:20.887 "auth": { 00:20:20.887 "state": "completed", 00:20:20.887 "digest": "sha384", 00:20:20.887 "dhgroup": "ffdhe2048" 00:20:20.887 } 00:20:20.887 } 00:20:20.887 ]' 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.887 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.146 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:21.146 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.080 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.340 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.907 00:20:22.907 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.907 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.907 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.165 { 00:20:23.165 "cntlid": 61, 00:20:23.165 "qid": 0, 00:20:23.165 "state": "enabled", 00:20:23.165 "thread": "nvmf_tgt_poll_group_000", 00:20:23.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.165 "listen_address": { 00:20:23.165 "trtype": "TCP", 00:20:23.165 "adrfam": "IPv4", 00:20:23.165 "traddr": "10.0.0.2", 00:20:23.165 "trsvcid": "4420" 00:20:23.165 }, 00:20:23.165 "peer_address": { 00:20:23.165 "trtype": "TCP", 00:20:23.165 "adrfam": "IPv4", 00:20:23.165 "traddr": "10.0.0.1", 00:20:23.165 "trsvcid": "35140" 00:20:23.165 }, 00:20:23.165 "auth": { 00:20:23.165 "state": "completed", 00:20:23.165 "digest": "sha384", 00:20:23.165 "dhgroup": "ffdhe2048" 00:20:23.165 } 00:20:23.165 } 00:20:23.165 ]' 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.165 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.166 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.166 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.166 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.166 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.166 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.424 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:23.424 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.360 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.618 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.876 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.876 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.876 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.876 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.134 00:20:25.134 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.134 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.134 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.392 { 00:20:25.392 "cntlid": 63, 00:20:25.392 "qid": 0, 00:20:25.392 "state": "enabled", 00:20:25.392 "thread": "nvmf_tgt_poll_group_000", 00:20:25.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.392 "listen_address": { 00:20:25.392 "trtype": "TCP", 00:20:25.392 "adrfam": "IPv4", 00:20:25.392 "traddr": "10.0.0.2", 00:20:25.392 "trsvcid": "4420" 00:20:25.392 }, 00:20:25.392 "peer_address": { 00:20:25.392 "trtype": "TCP", 00:20:25.392 "adrfam": "IPv4", 00:20:25.392 "traddr": "10.0.0.1", 00:20:25.392 "trsvcid": "35160" 00:20:25.392 }, 00:20:25.392 "auth": { 00:20:25.392 "state": "completed", 00:20:25.392 "digest": "sha384", 00:20:25.392 "dhgroup": "ffdhe2048" 00:20:25.392 } 00:20:25.392 } 00:20:25.392 ]' 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.392 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.651 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.651 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.651 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.909 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:25.909 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:26.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.996 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.997 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.276 
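(Each combination is also exercised through the kernel initiator before the host entry is revoked. A sketch of that leg, using the same illustrative rpc/hostnqn/subnqn placeholders and abbreviating the DHHC-1 secrets that the log prints in full:)
  # Kernel initiator: connect with the plaintext DH-HMAC-CHAP secrets, then disconnect.
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret "DHHC-1:00:MWRk...==:" --dhchap-ctrl-secret "DHHC-1:03:OWNh...=:"
  nvme disconnect -n $subnqn
  # Target side: remove the host again before the next digest/dhgroup/key combination.
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn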
00:20:27.276 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.276 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.276 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.561 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.561 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.561 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.561 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.561 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.561 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.561 { 00:20:27.561 "cntlid": 65, 00:20:27.561 "qid": 0, 00:20:27.561 "state": "enabled", 00:20:27.561 "thread": "nvmf_tgt_poll_group_000", 00:20:27.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.561 "listen_address": { 00:20:27.561 "trtype": "TCP", 00:20:27.561 "adrfam": "IPv4", 00:20:27.561 "traddr": "10.0.0.2", 00:20:27.561 "trsvcid": "4420" 00:20:27.561 }, 00:20:27.561 "peer_address": { 00:20:27.561 "trtype": "TCP", 00:20:27.561 "adrfam": "IPv4", 00:20:27.561 "traddr": "10.0.0.1", 00:20:27.561 "trsvcid": "38704" 00:20:27.561 }, 00:20:27.561 "auth": { 00:20:27.561 "state": "completed", 00:20:27.561 "digest": "sha384", 00:20:27.561 "dhgroup": "ffdhe3072" 00:20:27.561 } 00:20:27.561 } 00:20:27.561 ]' 00:20:27.561 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.867 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.867 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.867 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.867 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.867 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.867 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.867 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.177 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:28.177 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.126 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.385 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.385 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.385 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.385 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.643 00:20:29.643 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.643 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.643 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.901 { 00:20:29.901 "cntlid": 67, 00:20:29.901 "qid": 0, 00:20:29.901 "state": "enabled", 00:20:29.901 "thread": "nvmf_tgt_poll_group_000", 00:20:29.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.901 "listen_address": { 00:20:29.901 "trtype": "TCP", 00:20:29.901 "adrfam": "IPv4", 00:20:29.901 "traddr": "10.0.0.2", 00:20:29.901 "trsvcid": "4420" 00:20:29.901 }, 00:20:29.901 "peer_address": { 00:20:29.901 "trtype": "TCP", 00:20:29.901 "adrfam": "IPv4", 00:20:29.901 "traddr": "10.0.0.1", 00:20:29.901 "trsvcid": "38740" 00:20:29.901 }, 00:20:29.901 "auth": { 00:20:29.901 "state": "completed", 00:20:29.901 "digest": "sha384", 00:20:29.901 "dhgroup": "ffdhe3072" 00:20:29.901 } 00:20:29.901 } 00:20:29.901 ]' 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.901 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.902 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.902 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.902 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.468 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret 
DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:30.468 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:31.034 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.292 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.292 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.292 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.292 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.292 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.292 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.292 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.550 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.808 00:20:31.808 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.808 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.808 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.066 { 00:20:32.066 "cntlid": 69, 00:20:32.066 "qid": 0, 00:20:32.066 "state": "enabled", 00:20:32.066 "thread": "nvmf_tgt_poll_group_000", 00:20:32.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.066 "listen_address": { 00:20:32.066 "trtype": "TCP", 00:20:32.066 "adrfam": "IPv4", 00:20:32.066 "traddr": "10.0.0.2", 00:20:32.066 "trsvcid": "4420" 00:20:32.066 }, 00:20:32.066 "peer_address": { 00:20:32.066 "trtype": "TCP", 00:20:32.066 "adrfam": "IPv4", 00:20:32.066 "traddr": "10.0.0.1", 00:20:32.066 "trsvcid": "38776" 00:20:32.066 }, 00:20:32.066 "auth": { 00:20:32.066 "state": "completed", 00:20:32.066 "digest": "sha384", 00:20:32.066 "dhgroup": "ffdhe3072" 00:20:32.066 } 00:20:32.066 } 00:20:32.066 ]' 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.066 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.325 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.325 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.325 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:32.583 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:32.583 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.517 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
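The trace above is one full pass of the per-key check: the host-side bdev layer is restricted to a single digest/dhgroup, the target is told to accept the host with the key under test, a controller is attached over TCP, and the resulting qpair is inspected before teardown. Below is a condensed sketch of that flow, assuming the paths, address and NQNs shown in this log; the loop variables and single-pass framing are illustrative, not the actual target/auth.sh source.

  # One (digest, dhgroup, key) pass of the DH-HMAC-CHAP check, reconstructed from the trace.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock                 # RPC socket of the host-side SPDK app
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  digest=sha384 dhgroup=ffdhe3072 keyid=3      # the combination exercised above

  # Host side: only offer the digest/dhgroup under test.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Target side (default RPC socket): accept this host with the key under test;
  # --dhchap-ctrlr-key is passed only for keys that have a controller key in the test set.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"
  # Host side: attach and authenticate over TCP.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid"
  # Verify the attach produced a controller and the qpair finished authentication as expected.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect: completed
  # Tear down before the next combination.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
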
00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.775 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.033 00:20:34.033 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.033 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.033 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.292 { 00:20:34.292 "cntlid": 71, 00:20:34.292 "qid": 0, 00:20:34.292 "state": "enabled", 00:20:34.292 "thread": "nvmf_tgt_poll_group_000", 00:20:34.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.292 "listen_address": { 00:20:34.292 "trtype": "TCP", 00:20:34.292 "adrfam": "IPv4", 00:20:34.292 "traddr": "10.0.0.2", 00:20:34.292 "trsvcid": "4420" 00:20:34.292 }, 00:20:34.292 "peer_address": { 00:20:34.292 "trtype": "TCP", 00:20:34.292 "adrfam": "IPv4", 00:20:34.292 "traddr": "10.0.0.1", 00:20:34.292 "trsvcid": "38792" 00:20:34.292 }, 00:20:34.292 "auth": { 00:20:34.292 "state": "completed", 00:20:34.292 "digest": "sha384", 00:20:34.292 "dhgroup": "ffdhe3072" 00:20:34.292 } 00:20:34.292 } 00:20:34.292 ]' 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.292 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.858 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:34.858 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.792 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.793 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
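Each pass also exercises the kernel initiator with the same key material: nvme-cli connects with the DHHC-1 secrets, the connection is dropped, and the host entry is revoked before the next combination. A minimal sketch of that leg, assuming the flags shown in the trace; HOST_KEY and CTRL_KEY stand in for the DHHC-1:xx:... strings printed in the log.

  # Kernel-initiator leg of the same pass; flags copied from the trace, secrets elided.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  HOST_KEY='DHHC-1:00:...'     # paste the host secret for the key under test (see the log above)
  CTRL_KEY='DHHC-1:03:...'     # paste the matching controller secret

  # Connect through the kernel host stack, authenticating with DH-HMAC-CHAP.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
  # Drop the connection and revoke the host entry before the next (digest, dhgroup, key) pass.
  nvme disconnect -n "$SUBNQN"     # the log reports: disconnected 1 controller(s)
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
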
00:20:35.793 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.793 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.793 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.357 00:20:36.357 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.357 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.357 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.614 { 00:20:36.614 "cntlid": 73, 00:20:36.614 "qid": 0, 00:20:36.614 "state": "enabled", 00:20:36.614 "thread": "nvmf_tgt_poll_group_000", 00:20:36.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.614 "listen_address": { 00:20:36.614 "trtype": "TCP", 00:20:36.614 "adrfam": "IPv4", 00:20:36.614 "traddr": "10.0.0.2", 00:20:36.614 "trsvcid": "4420" 00:20:36.614 }, 00:20:36.614 "peer_address": { 00:20:36.614 "trtype": "TCP", 00:20:36.614 "adrfam": "IPv4", 00:20:36.614 "traddr": "10.0.0.1", 00:20:36.614 "trsvcid": "38822" 00:20:36.614 }, 00:20:36.614 "auth": { 00:20:36.614 "state": "completed", 00:20:36.614 "digest": "sha384", 00:20:36.614 "dhgroup": "ffdhe4096" 00:20:36.614 } 00:20:36.614 } 00:20:36.614 ]' 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.614 
03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.614 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.872 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:36.872 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.805 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.063 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.628 00:20:38.628 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.628 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.628 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.885 { 00:20:38.885 "cntlid": 75, 00:20:38.885 "qid": 0, 00:20:38.885 "state": "enabled", 00:20:38.885 "thread": "nvmf_tgt_poll_group_000", 00:20:38.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.885 "listen_address": { 00:20:38.885 "trtype": "TCP", 00:20:38.885 "adrfam": "IPv4", 00:20:38.885 "traddr": "10.0.0.2", 00:20:38.885 "trsvcid": "4420" 00:20:38.885 }, 00:20:38.885 "peer_address": { 00:20:38.885 "trtype": "TCP", 00:20:38.885 "adrfam": "IPv4", 00:20:38.885 "traddr": "10.0.0.1", 00:20:38.885 "trsvcid": "55194" 00:20:38.885 }, 00:20:38.885 "auth": { 00:20:38.885 "state": "completed", 00:20:38.885 "digest": "sha384", 00:20:38.885 "dhgroup": "ffdhe4096" 00:20:38.885 } 00:20:38.885 } 00:20:38.885 ]' 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.885 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.142 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:39.142 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.076 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.334 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.899 00:20:40.899 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.899 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.899 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.157 { 00:20:41.157 "cntlid": 77, 00:20:41.157 "qid": 0, 00:20:41.157 "state": "enabled", 00:20:41.157 "thread": "nvmf_tgt_poll_group_000", 00:20:41.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.157 "listen_address": { 00:20:41.157 "trtype": "TCP", 00:20:41.157 "adrfam": "IPv4", 00:20:41.157 "traddr": "10.0.0.2", 00:20:41.157 "trsvcid": "4420" 00:20:41.157 }, 00:20:41.157 "peer_address": { 00:20:41.157 "trtype": "TCP", 00:20:41.157 "adrfam": "IPv4", 00:20:41.157 "traddr": "10.0.0.1", 00:20:41.157 "trsvcid": "55202" 00:20:41.157 }, 00:20:41.157 "auth": { 00:20:41.157 "state": "completed", 00:20:41.157 "digest": "sha384", 00:20:41.157 "dhgroup": "ffdhe4096" 00:20:41.157 } 00:20:41.157 } 00:20:41.157 ]' 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.157 03:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.157 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.415 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:41.415 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:42.348 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:42.606 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:42.606 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.606 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.606 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:42.606 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.606 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.606 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.607 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.607 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.607 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.607 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.607 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.607 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.172 00:20:43.172 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.172 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.172 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.430 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.430 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.430 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.430 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.430 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.430 { 00:20:43.430 "cntlid": 79, 00:20:43.430 "qid": 0, 00:20:43.430 "state": "enabled", 00:20:43.430 "thread": "nvmf_tgt_poll_group_000", 00:20:43.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.430 "listen_address": { 00:20:43.430 "trtype": "TCP", 00:20:43.430 "adrfam": "IPv4", 00:20:43.430 "traddr": "10.0.0.2", 00:20:43.430 "trsvcid": "4420" 00:20:43.430 }, 00:20:43.430 "peer_address": { 00:20:43.430 "trtype": "TCP", 00:20:43.430 "adrfam": "IPv4", 00:20:43.430 "traddr": "10.0.0.1", 00:20:43.430 "trsvcid": "55238" 00:20:43.430 }, 00:20:43.430 "auth": { 00:20:43.430 "state": "completed", 00:20:43.430 "digest": "sha384", 00:20:43.430 "dhgroup": "ffdhe4096" 00:20:43.430 } 00:20:43.430 } 00:20:43.430 ]' 00:20:43.430 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.431 03:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.431 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.431 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.431 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.431 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.431 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.431 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.689 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:43.689 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.622 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.880 03:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.880 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.881 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.881 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.881 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.447 00:20:45.447 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.447 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.447 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.705 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.705 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.705 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.705 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.705 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.705 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.705 { 00:20:45.705 "cntlid": 81, 00:20:45.705 "qid": 0, 00:20:45.705 "state": "enabled", 00:20:45.705 "thread": "nvmf_tgt_poll_group_000", 00:20:45.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.706 "listen_address": { 00:20:45.706 "trtype": "TCP", 00:20:45.706 "adrfam": "IPv4", 00:20:45.706 "traddr": "10.0.0.2", 00:20:45.706 "trsvcid": "4420" 00:20:45.706 }, 00:20:45.706 "peer_address": { 00:20:45.706 "trtype": "TCP", 00:20:45.706 "adrfam": "IPv4", 00:20:45.706 "traddr": "10.0.0.1", 00:20:45.706 "trsvcid": "55276" 00:20:45.706 }, 00:20:45.706 "auth": { 00:20:45.706 "state": "completed", 00:20:45.706 "digest": 
"sha384", 00:20:45.706 "dhgroup": "ffdhe6144" 00:20:45.706 } 00:20:45.706 } 00:20:45.706 ]' 00:20:45.706 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.706 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.706 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.964 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.964 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.964 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.964 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.964 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.222 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:46.222 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.156 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.414 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.980 00:20:47.980 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.980 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.980 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.238 { 00:20:48.238 "cntlid": 83, 00:20:48.238 "qid": 0, 00:20:48.238 "state": "enabled", 00:20:48.238 "thread": "nvmf_tgt_poll_group_000", 00:20:48.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.238 "listen_address": { 00:20:48.238 "trtype": "TCP", 00:20:48.238 "adrfam": "IPv4", 00:20:48.238 "traddr": "10.0.0.2", 00:20:48.238 
"trsvcid": "4420" 00:20:48.238 }, 00:20:48.238 "peer_address": { 00:20:48.238 "trtype": "TCP", 00:20:48.238 "adrfam": "IPv4", 00:20:48.238 "traddr": "10.0.0.1", 00:20:48.238 "trsvcid": "36266" 00:20:48.238 }, 00:20:48.238 "auth": { 00:20:48.238 "state": "completed", 00:20:48.238 "digest": "sha384", 00:20:48.238 "dhgroup": "ffdhe6144" 00:20:48.238 } 00:20:48.238 } 00:20:48.238 ]' 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.238 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.496 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:48.496 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.429 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.688 
03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.688 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.254 00:20:50.254 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.255 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.255 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.512 { 00:20:50.512 "cntlid": 85, 00:20:50.512 "qid": 0, 00:20:50.512 "state": "enabled", 00:20:50.512 "thread": "nvmf_tgt_poll_group_000", 00:20:50.512 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.512 "listen_address": { 00:20:50.512 "trtype": "TCP", 00:20:50.512 "adrfam": "IPv4", 00:20:50.512 "traddr": "10.0.0.2", 00:20:50.512 "trsvcid": "4420" 00:20:50.512 }, 00:20:50.512 "peer_address": { 00:20:50.512 "trtype": "TCP", 00:20:50.512 "adrfam": "IPv4", 00:20:50.512 "traddr": "10.0.0.1", 00:20:50.512 "trsvcid": "36294" 00:20:50.512 }, 00:20:50.512 "auth": { 00:20:50.512 "state": "completed", 00:20:50.512 "digest": "sha384", 00:20:50.512 "dhgroup": "ffdhe6144" 00:20:50.512 } 00:20:50.512 } 00:20:50.512 ]' 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.512 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.770 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.770 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.771 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.029 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:51.029 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:20:51.959 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.959 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.959 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.959 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.959 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.959 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.959 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:51.959 03:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.217 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:52.217 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.217 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.218 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.784 00:20:52.784 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.784 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.784 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.042 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.042 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.042 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.042 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.042 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.042 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.042 { 00:20:53.042 "cntlid": 87, 
00:20:53.042 "qid": 0, 00:20:53.042 "state": "enabled", 00:20:53.042 "thread": "nvmf_tgt_poll_group_000", 00:20:53.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.042 "listen_address": { 00:20:53.042 "trtype": "TCP", 00:20:53.042 "adrfam": "IPv4", 00:20:53.042 "traddr": "10.0.0.2", 00:20:53.042 "trsvcid": "4420" 00:20:53.042 }, 00:20:53.042 "peer_address": { 00:20:53.042 "trtype": "TCP", 00:20:53.042 "adrfam": "IPv4", 00:20:53.042 "traddr": "10.0.0.1", 00:20:53.042 "trsvcid": "36314" 00:20:53.042 }, 00:20:53.042 "auth": { 00:20:53.042 "state": "completed", 00:20:53.042 "digest": "sha384", 00:20:53.042 "dhgroup": "ffdhe6144" 00:20:53.042 } 00:20:53.042 } 00:20:53.042 ]' 00:20:53.042 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.299 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.299 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.299 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.299 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.299 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.300 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.300 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.557 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:53.557 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.488 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.745 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.677 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.677 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.677 { 00:20:55.677 "cntlid": 89, 00:20:55.677 "qid": 0, 00:20:55.677 "state": "enabled", 00:20:55.677 "thread": "nvmf_tgt_poll_group_000", 00:20:55.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.677 "listen_address": { 00:20:55.677 "trtype": "TCP", 00:20:55.677 "adrfam": "IPv4", 00:20:55.677 "traddr": "10.0.0.2", 00:20:55.677 "trsvcid": "4420" 00:20:55.677 }, 00:20:55.677 "peer_address": { 00:20:55.677 "trtype": "TCP", 00:20:55.677 "adrfam": "IPv4", 00:20:55.677 "traddr": "10.0.0.1", 00:20:55.677 "trsvcid": "36354" 00:20:55.677 }, 00:20:55.677 "auth": { 00:20:55.677 "state": "completed", 00:20:55.677 "digest": "sha384", 00:20:55.677 "dhgroup": "ffdhe8192" 00:20:55.677 } 00:20:55.677 } 00:20:55.677 ]' 00:20:55.678 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.935 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.935 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.935 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.935 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.935 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.935 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.935 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.193 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:56.193 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:20:57.126 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.126 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.126 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.126 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.126 03:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.126 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.126 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.126 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.383 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:57.383 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.383 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.383 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.383 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.383 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.383 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.384 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.384 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.384 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.384 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.384 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.384 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.317 00:20:58.317 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.317 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.317 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.575 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.575 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:58.575 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.575 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.575 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.575 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.575 { 00:20:58.575 "cntlid": 91, 00:20:58.575 "qid": 0, 00:20:58.575 "state": "enabled", 00:20:58.575 "thread": "nvmf_tgt_poll_group_000", 00:20:58.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.575 "listen_address": { 00:20:58.575 "trtype": "TCP", 00:20:58.575 "adrfam": "IPv4", 00:20:58.575 "traddr": "10.0.0.2", 00:20:58.575 "trsvcid": "4420" 00:20:58.575 }, 00:20:58.575 "peer_address": { 00:20:58.575 "trtype": "TCP", 00:20:58.575 "adrfam": "IPv4", 00:20:58.575 "traddr": "10.0.0.1", 00:20:58.575 "trsvcid": "52668" 00:20:58.575 }, 00:20:58.575 "auth": { 00:20:58.575 "state": "completed", 00:20:58.575 "digest": "sha384", 00:20:58.575 "dhgroup": "ffdhe8192" 00:20:58.575 } 00:20:58.575 } 00:20:58.575 ]' 00:20:58.575 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.575 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.575 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.575 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.575 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.575 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.575 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.575 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.831 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:58.831 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:20:59.763 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.763 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.763 03:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.763 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.763 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.763 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.763 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.763 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.022 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.955 00:21:00.955 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.955 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.955 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.213 03:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.213 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.213 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.213 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.213 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.213 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.213 { 00:21:01.213 "cntlid": 93, 00:21:01.213 "qid": 0, 00:21:01.213 "state": "enabled", 00:21:01.213 "thread": "nvmf_tgt_poll_group_000", 00:21:01.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.213 "listen_address": { 00:21:01.213 "trtype": "TCP", 00:21:01.213 "adrfam": "IPv4", 00:21:01.213 "traddr": "10.0.0.2", 00:21:01.213 "trsvcid": "4420" 00:21:01.213 }, 00:21:01.214 "peer_address": { 00:21:01.214 "trtype": "TCP", 00:21:01.214 "adrfam": "IPv4", 00:21:01.214 "traddr": "10.0.0.1", 00:21:01.214 "trsvcid": "52690" 00:21:01.214 }, 00:21:01.214 "auth": { 00:21:01.214 "state": "completed", 00:21:01.214 "digest": "sha384", 00:21:01.214 "dhgroup": "ffdhe8192" 00:21:01.214 } 00:21:01.214 } 00:21:01.214 ]' 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.214 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.471 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:01.472 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:02.405 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.405 03:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.405 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.405 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.405 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.405 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.405 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.405 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.663 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.596 00:21:03.596 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.596 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.597 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.854 { 00:21:03.854 "cntlid": 95, 00:21:03.854 "qid": 0, 00:21:03.854 "state": "enabled", 00:21:03.854 "thread": "nvmf_tgt_poll_group_000", 00:21:03.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.854 "listen_address": { 00:21:03.854 "trtype": "TCP", 00:21:03.854 "adrfam": "IPv4", 00:21:03.854 "traddr": "10.0.0.2", 00:21:03.854 "trsvcid": "4420" 00:21:03.854 }, 00:21:03.854 "peer_address": { 00:21:03.854 "trtype": "TCP", 00:21:03.854 "adrfam": "IPv4", 00:21:03.854 "traddr": "10.0.0.1", 00:21:03.854 "trsvcid": "52718" 00:21:03.854 }, 00:21:03.854 "auth": { 00:21:03.854 "state": "completed", 00:21:03.854 "digest": "sha384", 00:21:03.854 "dhgroup": "ffdhe8192" 00:21:03.854 } 00:21:03.854 } 00:21:03.854 ]' 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.854 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.855 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.855 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.855 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.113 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:04.113 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.047 03:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.047 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.305 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.872 00:21:05.872 
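The trace above repeats the same connect_authenticate loop for each digest/dhgroup/key combination. Below is a condensed bash sketch of one such iteration, reconstructed only from the commands visible in this excerpt; the RPC socket path, addresses, and NQNs are taken from the trace, while the key objects (key0/ckey0) are assumed to have been registered earlier in the run, outside this excerpt.

```bash
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as exercised above (sha512 / null dhgroup / key0).
# Assumes: nvmf target listening on 10.0.0.2:4420, host bdev_nvme RPC server on /var/tmp/host.sock,
# and DH-CHAP key objects key0/ckey0 registered earlier in the run (not shown in this excerpt).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host-side initiator to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Target side: allow the host, binding it to key0 (and ckey0 for bidirectional auth).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller over TCP, authenticating with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the attach and the negotiated auth parameters on the target's qpair.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]

# Tear down before the next combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
```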
03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.872 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.872 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.872 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.872 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.872 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.872 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.130 { 00:21:06.130 "cntlid": 97, 00:21:06.130 "qid": 0, 00:21:06.130 "state": "enabled", 00:21:06.130 "thread": "nvmf_tgt_poll_group_000", 00:21:06.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.130 "listen_address": { 00:21:06.130 "trtype": "TCP", 00:21:06.130 "adrfam": "IPv4", 00:21:06.130 "traddr": "10.0.0.2", 00:21:06.130 "trsvcid": "4420" 00:21:06.130 }, 00:21:06.130 "peer_address": { 00:21:06.130 "trtype": "TCP", 00:21:06.130 "adrfam": "IPv4", 00:21:06.130 "traddr": "10.0.0.1", 00:21:06.130 "trsvcid": "52732" 00:21:06.130 }, 00:21:06.130 "auth": { 00:21:06.130 "state": "completed", 00:21:06.130 "digest": "sha512", 00:21:06.130 "dhgroup": "null" 00:21:06.130 } 00:21:06.130 } 00:21:06.130 ]' 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.130 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.388 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:06.388 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.323 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.581 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.146 00:21:08.146 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.146 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.146 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.404 { 00:21:08.404 "cntlid": 99, 00:21:08.404 "qid": 0, 00:21:08.404 "state": "enabled", 00:21:08.404 "thread": "nvmf_tgt_poll_group_000", 00:21:08.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.404 "listen_address": { 00:21:08.404 "trtype": "TCP", 00:21:08.404 "adrfam": "IPv4", 00:21:08.404 "traddr": "10.0.0.2", 00:21:08.404 "trsvcid": "4420" 00:21:08.404 }, 00:21:08.404 "peer_address": { 00:21:08.404 "trtype": "TCP", 00:21:08.404 "adrfam": "IPv4", 00:21:08.404 "traddr": "10.0.0.1", 00:21:08.404 "trsvcid": "35162" 00:21:08.404 }, 00:21:08.404 "auth": { 00:21:08.404 "state": "completed", 00:21:08.404 "digest": "sha512", 00:21:08.404 "dhgroup": "null" 00:21:08.404 } 00:21:08.404 } 00:21:08.404 ]' 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.404 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.662 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:08.662 03:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.594 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
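Each key is also exercised in-band through nvme-cli, per the nvme connect/disconnect lines in the trace. The following is a hedged sketch of that leg; the DHHC-1 secrets are placeholders standing in for the generated secrets that appear verbatim in the log, and an nvme-cli build with DH-CHAP support is assumed.

```bash
#!/usr/bin/env bash
# Sketch of the nvme-cli leg of the test, mirroring the nvme_connect/nvme disconnect calls above.
# The DHHC-1 secrets below are placeholders; the trace uses secrets generated earlier in the run.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0
key='DHHC-1:02:<host secret>'             # placeholder
ctrl_key='DHHC-1:01:<controller secret>'  # placeholder

# Kernel initiator connects over TCP with bidirectional DH-HMAC-CHAP.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"

nvme disconnect -n "$subnqn"

# Target side: drop the host entry before the next digest/dhgroup/key combination.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```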
00:21:09.852 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.417 00:21:10.417 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.417 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.417 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.418 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.418 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.418 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.418 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.676 { 00:21:10.676 "cntlid": 101, 00:21:10.676 "qid": 0, 00:21:10.676 "state": "enabled", 00:21:10.676 "thread": "nvmf_tgt_poll_group_000", 00:21:10.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.676 "listen_address": { 00:21:10.676 "trtype": "TCP", 00:21:10.676 "adrfam": "IPv4", 00:21:10.676 "traddr": "10.0.0.2", 00:21:10.676 "trsvcid": "4420" 00:21:10.676 }, 00:21:10.676 "peer_address": { 00:21:10.676 "trtype": "TCP", 00:21:10.676 "adrfam": "IPv4", 00:21:10.676 "traddr": "10.0.0.1", 00:21:10.676 "trsvcid": "35184" 00:21:10.676 }, 00:21:10.676 "auth": { 00:21:10.676 "state": "completed", 00:21:10.676 "digest": "sha512", 00:21:10.676 "dhgroup": "null" 00:21:10.676 } 00:21:10.676 } 00:21:10.676 ]' 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.676 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.934 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:10.934 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.866 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:12.123 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.124 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.124 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.124 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.124 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.124 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.688 00:21:12.688 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.688 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.688 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.688 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.688 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.689 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.689 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.689 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.689 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.689 { 00:21:12.689 "cntlid": 103, 00:21:12.689 "qid": 0, 00:21:12.689 "state": "enabled", 00:21:12.689 "thread": "nvmf_tgt_poll_group_000", 00:21:12.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.689 "listen_address": { 00:21:12.689 "trtype": "TCP", 00:21:12.689 "adrfam": "IPv4", 00:21:12.689 "traddr": "10.0.0.2", 00:21:12.689 "trsvcid": "4420" 00:21:12.689 }, 00:21:12.689 "peer_address": { 00:21:12.689 "trtype": "TCP", 00:21:12.689 "adrfam": "IPv4", 00:21:12.689 "traddr": "10.0.0.1", 00:21:12.689 "trsvcid": "35210" 00:21:12.689 }, 00:21:12.689 "auth": { 00:21:12.689 "state": "completed", 00:21:12.689 "digest": "sha512", 00:21:12.689 "dhgroup": "null" 00:21:12.689 } 00:21:12.689 } 00:21:12.689 ]' 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.947 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.205 03:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:13.205 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.138 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.396 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.653 00:21:14.653 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.653 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.653 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.911 { 00:21:14.911 "cntlid": 105, 00:21:14.911 "qid": 0, 00:21:14.911 "state": "enabled", 00:21:14.911 "thread": "nvmf_tgt_poll_group_000", 00:21:14.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.911 "listen_address": { 00:21:14.911 "trtype": "TCP", 00:21:14.911 "adrfam": "IPv4", 00:21:14.911 "traddr": "10.0.0.2", 00:21:14.911 "trsvcid": "4420" 00:21:14.911 }, 00:21:14.911 "peer_address": { 00:21:14.911 "trtype": "TCP", 00:21:14.911 "adrfam": "IPv4", 00:21:14.911 "traddr": "10.0.0.1", 00:21:14.911 "trsvcid": "35234" 00:21:14.911 }, 00:21:14.911 "auth": { 00:21:14.911 "state": "completed", 00:21:14.911 "digest": "sha512", 00:21:14.911 "dhgroup": "ffdhe2048" 00:21:14.911 } 00:21:14.911 } 00:21:14.911 ]' 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.911 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.168 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.168 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.168 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.168 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.168 03:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.425 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:15.425 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.359 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.617 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.874 00:21:16.874 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.874 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.874 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.132 { 00:21:17.132 "cntlid": 107, 00:21:17.132 "qid": 0, 00:21:17.132 "state": "enabled", 00:21:17.132 "thread": "nvmf_tgt_poll_group_000", 00:21:17.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.132 "listen_address": { 00:21:17.132 "trtype": "TCP", 00:21:17.132 "adrfam": "IPv4", 00:21:17.132 "traddr": "10.0.0.2", 00:21:17.132 "trsvcid": "4420" 00:21:17.132 }, 00:21:17.132 "peer_address": { 00:21:17.132 "trtype": "TCP", 00:21:17.132 "adrfam": "IPv4", 00:21:17.132 "traddr": "10.0.0.1", 00:21:17.132 "trsvcid": "35006" 00:21:17.132 }, 00:21:17.132 "auth": { 00:21:17.132 "state": "completed", 00:21:17.132 "digest": "sha512", 00:21:17.132 "dhgroup": "ffdhe2048" 00:21:17.132 } 00:21:17.132 } 00:21:17.132 ]' 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.132 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.390 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:17.390 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:17.390 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.390 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.390 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.647 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:17.648 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:18.580 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:18.836 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:18.836 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.836 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.836 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:18.836 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
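[Note on the kernel-initiator leg] Besides the SPDK bdev/nvme path, every iteration in this log also authenticates with the in-kernel initiator: nvme connect is given the DHHC-1 secrets literally via --dhchap-secret/--dhchap-ctrl-secret, and the session is dropped with nvme disconnect before the host entry is removed from the subsystem. A sketch of that leg with the secrets elided; the DHHC-1:xx:...: strings below are placeholders, not the values used in this run:

  # Kernel-initiator connect as exercised by the log; secrets are placeholders.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:01:...:' --dhchap-ctrl-secret 'DHHC-1:02:...:'

  # Drop the kernel session; the test then removes the host from the subsystem.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
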
00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.837 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.093 00:21:19.093 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.093 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.093 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.351 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.351 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.351 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.351 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.351 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.351 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.351 { 00:21:19.351 "cntlid": 109, 00:21:19.351 "qid": 0, 00:21:19.351 "state": "enabled", 00:21:19.351 "thread": "nvmf_tgt_poll_group_000", 00:21:19.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.351 "listen_address": { 00:21:19.351 "trtype": "TCP", 00:21:19.351 "adrfam": "IPv4", 00:21:19.351 "traddr": "10.0.0.2", 00:21:19.351 "trsvcid": "4420" 00:21:19.351 }, 00:21:19.351 "peer_address": { 00:21:19.351 "trtype": "TCP", 00:21:19.351 "adrfam": "IPv4", 00:21:19.351 "traddr": "10.0.0.1", 00:21:19.351 "trsvcid": "35040" 00:21:19.351 }, 00:21:19.351 "auth": { 00:21:19.351 "state": "completed", 00:21:19.351 "digest": "sha512", 00:21:19.351 "dhgroup": "ffdhe2048" 00:21:19.351 } 00:21:19.351 } 00:21:19.351 ]' 00:21:19.351 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.609 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.609 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.609 03:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:19.609 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.609 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.609 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.609 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.867 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:19.867 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.800 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.058 03:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.058 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.315 00:21:21.315 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.315 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.315 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.574 { 00:21:21.574 "cntlid": 111, 00:21:21.574 "qid": 0, 00:21:21.574 "state": "enabled", 00:21:21.574 "thread": "nvmf_tgt_poll_group_000", 00:21:21.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.574 "listen_address": { 00:21:21.574 "trtype": "TCP", 00:21:21.574 "adrfam": "IPv4", 00:21:21.574 "traddr": "10.0.0.2", 00:21:21.574 "trsvcid": "4420" 00:21:21.574 }, 00:21:21.574 "peer_address": { 00:21:21.574 "trtype": "TCP", 00:21:21.574 "adrfam": "IPv4", 00:21:21.574 "traddr": "10.0.0.1", 00:21:21.574 "trsvcid": "35072" 00:21:21.574 }, 00:21:21.574 "auth": { 00:21:21.574 "state": "completed", 00:21:21.574 "digest": "sha512", 00:21:21.574 "dhgroup": "ffdhe2048" 00:21:21.574 } 00:21:21.574 } 00:21:21.574 ]' 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.574 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.574 
03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.832 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.832 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.832 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.832 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.832 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.090 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:22.090 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.022 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.023 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.280 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.538 00:21:23.538 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.538 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.538 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.796 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.796 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.796 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.796 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.796 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.796 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.796 { 00:21:23.796 "cntlid": 113, 00:21:23.796 "qid": 0, 00:21:23.796 "state": "enabled", 00:21:23.796 "thread": "nvmf_tgt_poll_group_000", 00:21:23.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.796 "listen_address": { 00:21:23.796 "trtype": "TCP", 00:21:23.796 "adrfam": "IPv4", 00:21:23.796 "traddr": "10.0.0.2", 00:21:23.796 "trsvcid": "4420" 00:21:23.796 }, 00:21:23.796 "peer_address": { 00:21:23.796 "trtype": "TCP", 00:21:23.796 "adrfam": "IPv4", 00:21:23.796 "traddr": "10.0.0.1", 00:21:23.796 "trsvcid": "35098" 00:21:23.796 }, 00:21:23.796 "auth": { 00:21:23.796 "state": "completed", 00:21:23.796 "digest": "sha512", 00:21:23.796 "dhgroup": "ffdhe3072" 00:21:23.796 } 00:21:23.796 } 00:21:23.796 ]' 00:21:23.796 03:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.053 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.053 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.053 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.053 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.054 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.054 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.054 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.312 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:24.312 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.245 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.503 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:25.503 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.503 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:25.503 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:25.503 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.503 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.504 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.504 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.504 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.504 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.504 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.504 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.504 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.069 00:21:26.069 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.069 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.069 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.327 { 00:21:26.327 "cntlid": 115, 00:21:26.327 "qid": 0, 00:21:26.327 "state": "enabled", 00:21:26.327 "thread": "nvmf_tgt_poll_group_000", 00:21:26.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.327 "listen_address": { 00:21:26.327 "trtype": "TCP", 00:21:26.327 "adrfam": "IPv4", 00:21:26.327 "traddr": "10.0.0.2", 00:21:26.327 "trsvcid": "4420" 00:21:26.327 }, 00:21:26.327 "peer_address": { 00:21:26.327 "trtype": "TCP", 00:21:26.327 "adrfam": "IPv4", 
00:21:26.327 "traddr": "10.0.0.1", 00:21:26.327 "trsvcid": "35114" 00:21:26.327 }, 00:21:26.327 "auth": { 00:21:26.327 "state": "completed", 00:21:26.327 "digest": "sha512", 00:21:26.327 "dhgroup": "ffdhe3072" 00:21:26.327 } 00:21:26.327 } 00:21:26.327 ]' 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.327 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.584 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:26.585 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:27.516 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
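[Note on the loop shape] The same attach/verify/detach/connect cycle repeats for every DH group and key index in this section (null, ffdhe2048 and now ffdhe3072, keys 0 through 3), which is why the records look near-identical. Schematically, the driving loop in target/auth.sh behaves like the sketch below; only its call sites are visible in the trace, so the array contents and anything beyond the visible commands are assumptions, and the digest is pinned to sha512 only within this excerpt. Key index 3 has no ckey3 counterpart, so that pass registers the host with --dhchap-key only, matching the ${ckeys[$3]:+...} expansion seen above:

  # Schematic reconstruction of the loop behind these records; array contents
  # and helper internals are assumptions, not taken from the script source.
  for dhgroup in "${dhgroups[@]}"; do          # null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do           # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
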
00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.776 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.034 00:21:28.034 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.034 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.034 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.292 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.292 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.292 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.292 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.550 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.550 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.550 { 00:21:28.550 "cntlid": 117, 00:21:28.550 "qid": 0, 00:21:28.550 "state": "enabled", 00:21:28.550 "thread": "nvmf_tgt_poll_group_000", 00:21:28.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.550 "listen_address": { 00:21:28.550 "trtype": "TCP", 
00:21:28.550 "adrfam": "IPv4", 00:21:28.550 "traddr": "10.0.0.2", 00:21:28.550 "trsvcid": "4420" 00:21:28.550 }, 00:21:28.550 "peer_address": { 00:21:28.550 "trtype": "TCP", 00:21:28.550 "adrfam": "IPv4", 00:21:28.550 "traddr": "10.0.0.1", 00:21:28.550 "trsvcid": "54480" 00:21:28.550 }, 00:21:28.550 "auth": { 00:21:28.550 "state": "completed", 00:21:28.550 "digest": "sha512", 00:21:28.550 "dhgroup": "ffdhe3072" 00:21:28.550 } 00:21:28.550 } 00:21:28.550 ]' 00:21:28.550 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.550 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.550 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.550 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.550 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.550 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.550 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.550 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.808 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:28.808 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:29.740 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:29.997 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:29.997 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.998 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.563 00:21:30.563 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.563 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.563 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.821 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.821 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.822 { 00:21:30.822 "cntlid": 119, 00:21:30.822 "qid": 0, 00:21:30.822 "state": "enabled", 00:21:30.822 "thread": "nvmf_tgt_poll_group_000", 00:21:30.822 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.822 "listen_address": { 00:21:30.822 "trtype": "TCP", 00:21:30.822 "adrfam": "IPv4", 00:21:30.822 "traddr": "10.0.0.2", 00:21:30.822 "trsvcid": "4420" 00:21:30.822 }, 00:21:30.822 "peer_address": { 00:21:30.822 "trtype": "TCP", 00:21:30.822 "adrfam": "IPv4", 00:21:30.822 "traddr": "10.0.0.1", 00:21:30.822 "trsvcid": "54510" 00:21:30.822 }, 00:21:30.822 "auth": { 00:21:30.822 "state": "completed", 00:21:30.822 "digest": "sha512", 00:21:30.822 "dhgroup": "ffdhe3072" 00:21:30.822 } 00:21:30.822 } 00:21:30.822 ]' 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.822 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.079 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:31.080 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.014 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.014 03:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.271 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.528 00:21:32.529 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.529 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.529 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.095 03:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.095 { 00:21:33.095 "cntlid": 121, 00:21:33.095 "qid": 0, 00:21:33.095 "state": "enabled", 00:21:33.095 "thread": "nvmf_tgt_poll_group_000", 00:21:33.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.095 "listen_address": { 00:21:33.095 "trtype": "TCP", 00:21:33.095 "adrfam": "IPv4", 00:21:33.095 "traddr": "10.0.0.2", 00:21:33.095 "trsvcid": "4420" 00:21:33.095 }, 00:21:33.095 "peer_address": { 00:21:33.095 "trtype": "TCP", 00:21:33.095 "adrfam": "IPv4", 00:21:33.095 "traddr": "10.0.0.1", 00:21:33.095 "trsvcid": "54536" 00:21:33.095 }, 00:21:33.095 "auth": { 00:21:33.095 "state": "completed", 00:21:33.095 "digest": "sha512", 00:21:33.095 "dhgroup": "ffdhe4096" 00:21:33.095 } 00:21:33.095 } 00:21:33.095 ]' 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.095 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.352 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:33.352 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
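Each keyid iteration in this trace runs the same host/target sequence. A condensed sketch assembled from the commands shown above for the sha512/ffdhe4096/key0 pass (rpc.py abbreviates the scripts/rpc.py path, HOSTNQN stands for nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55, and the DHHC-1 secrets are elided here; they are the ones printed in the trace):

  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # ... qpair auth state is then verified with the jq checks shown above ...
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"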
00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.286 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.544 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.801 00:21:35.059 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.059 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.059 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.316 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.316 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.316 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.316 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.316 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.317 { 00:21:35.317 "cntlid": 123, 00:21:35.317 "qid": 0, 00:21:35.317 "state": "enabled", 00:21:35.317 "thread": "nvmf_tgt_poll_group_000", 00:21:35.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.317 "listen_address": { 00:21:35.317 "trtype": "TCP", 00:21:35.317 "adrfam": "IPv4", 00:21:35.317 "traddr": "10.0.0.2", 00:21:35.317 "trsvcid": "4420" 00:21:35.317 }, 00:21:35.317 "peer_address": { 00:21:35.317 "trtype": "TCP", 00:21:35.317 "adrfam": "IPv4", 00:21:35.317 "traddr": "10.0.0.1", 00:21:35.317 "trsvcid": "54576" 00:21:35.317 }, 00:21:35.317 "auth": { 00:21:35.317 "state": "completed", 00:21:35.317 "digest": "sha512", 00:21:35.317 "dhgroup": "ffdhe4096" 00:21:35.317 } 00:21:35.317 } 00:21:35.317 ]' 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.317 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.574 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:35.574 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:36.518 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.518 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.518 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.518 03:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.518 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.518 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.518 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.518 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.777 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.343 00:21:37.343 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.343 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.343 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.601 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.601 03:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.601 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.601 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.601 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.601 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.601 { 00:21:37.601 "cntlid": 125, 00:21:37.601 "qid": 0, 00:21:37.601 "state": "enabled", 00:21:37.601 "thread": "nvmf_tgt_poll_group_000", 00:21:37.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.601 "listen_address": { 00:21:37.601 "trtype": "TCP", 00:21:37.601 "adrfam": "IPv4", 00:21:37.601 "traddr": "10.0.0.2", 00:21:37.601 "trsvcid": "4420" 00:21:37.601 }, 00:21:37.601 "peer_address": { 00:21:37.601 "trtype": "TCP", 00:21:37.601 "adrfam": "IPv4", 00:21:37.601 "traddr": "10.0.0.1", 00:21:37.601 "trsvcid": "45674" 00:21:37.601 }, 00:21:37.601 "auth": { 00:21:37.601 "state": "completed", 00:21:37.601 "digest": "sha512", 00:21:37.601 "dhgroup": "ffdhe4096" 00:21:37.601 } 00:21:37.601 } 00:21:37.601 ]' 00:21:37.601 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.601 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.601 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.601 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:37.601 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.601 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.601 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.601 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.859 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:37.859 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.792 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.050 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.308 00:21:39.308 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.308 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.308 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.566 03:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.566 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.566 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.566 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.824 { 00:21:39.824 "cntlid": 127, 00:21:39.824 "qid": 0, 00:21:39.824 "state": "enabled", 00:21:39.824 "thread": "nvmf_tgt_poll_group_000", 00:21:39.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.824 "listen_address": { 00:21:39.824 "trtype": "TCP", 00:21:39.824 "adrfam": "IPv4", 00:21:39.824 "traddr": "10.0.0.2", 00:21:39.824 "trsvcid": "4420" 00:21:39.824 }, 00:21:39.824 "peer_address": { 00:21:39.824 "trtype": "TCP", 00:21:39.824 "adrfam": "IPv4", 00:21:39.824 "traddr": "10.0.0.1", 00:21:39.824 "trsvcid": "45708" 00:21:39.824 }, 00:21:39.824 "auth": { 00:21:39.824 "state": "completed", 00:21:39.824 "digest": "sha512", 00:21:39.824 "dhgroup": "ffdhe4096" 00:21:39.824 } 00:21:39.824 } 00:21:39.824 ]' 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.824 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.082 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:40.082 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.017 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.275 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.841 00:21:41.841 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.841 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.841 
03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.098 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.098 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.098 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.098 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.098 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.098 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.098 { 00:21:42.098 "cntlid": 129, 00:21:42.098 "qid": 0, 00:21:42.099 "state": "enabled", 00:21:42.099 "thread": "nvmf_tgt_poll_group_000", 00:21:42.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.099 "listen_address": { 00:21:42.099 "trtype": "TCP", 00:21:42.099 "adrfam": "IPv4", 00:21:42.099 "traddr": "10.0.0.2", 00:21:42.099 "trsvcid": "4420" 00:21:42.099 }, 00:21:42.099 "peer_address": { 00:21:42.099 "trtype": "TCP", 00:21:42.099 "adrfam": "IPv4", 00:21:42.099 "traddr": "10.0.0.1", 00:21:42.099 "trsvcid": "45732" 00:21:42.099 }, 00:21:42.099 "auth": { 00:21:42.099 "state": "completed", 00:21:42.099 "digest": "sha512", 00:21:42.099 "dhgroup": "ffdhe6144" 00:21:42.099 } 00:21:42.099 } 00:21:42.099 ]' 00:21:42.099 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.099 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.099 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.099 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.099 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.360 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.360 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.360 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.621 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:42.622 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret 
DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:43.555 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.555 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.555 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.555 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.555 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.555 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.555 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.556 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.556 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.120 00:21:44.120 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.120 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.120 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.378 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.379 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.379 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.379 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.379 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.379 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.379 { 00:21:44.379 "cntlid": 131, 00:21:44.379 "qid": 0, 00:21:44.379 "state": "enabled", 00:21:44.379 "thread": "nvmf_tgt_poll_group_000", 00:21:44.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.379 "listen_address": { 00:21:44.379 "trtype": "TCP", 00:21:44.379 "adrfam": "IPv4", 00:21:44.379 "traddr": "10.0.0.2", 00:21:44.379 "trsvcid": "4420" 00:21:44.379 }, 00:21:44.379 "peer_address": { 00:21:44.379 "trtype": "TCP", 00:21:44.379 "adrfam": "IPv4", 00:21:44.379 "traddr": "10.0.0.1", 00:21:44.379 "trsvcid": "45758" 00:21:44.379 }, 00:21:44.379 "auth": { 00:21:44.379 "state": "completed", 00:21:44.379 "digest": "sha512", 00:21:44.379 "dhgroup": "ffdhe6144" 00:21:44.379 } 00:21:44.379 } 00:21:44.379 ]' 00:21:44.379 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.379 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.637 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.637 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.637 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.637 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.637 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.637 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.895 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:44.895 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.828 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.086 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.652 00:21:46.652 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.652 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.652 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.910 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.910 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.910 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.910 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.910 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.910 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.910 { 00:21:46.910 "cntlid": 133, 00:21:46.910 "qid": 0, 00:21:46.910 "state": "enabled", 00:21:46.910 "thread": "nvmf_tgt_poll_group_000", 00:21:46.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.910 "listen_address": { 00:21:46.910 "trtype": "TCP", 00:21:46.910 "adrfam": "IPv4", 00:21:46.910 "traddr": "10.0.0.2", 00:21:46.910 "trsvcid": "4420" 00:21:46.910 }, 00:21:46.910 "peer_address": { 00:21:46.910 "trtype": "TCP", 00:21:46.910 "adrfam": "IPv4", 00:21:46.910 "traddr": "10.0.0.1", 00:21:46.910 "trsvcid": "45774" 00:21:46.910 }, 00:21:46.910 "auth": { 00:21:46.910 "state": "completed", 00:21:46.910 "digest": "sha512", 00:21:46.910 "dhgroup": "ffdhe6144" 00:21:46.910 } 00:21:46.910 } 00:21:46.910 ]' 00:21:46.910 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.168 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.168 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.168 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.168 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.168 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.168 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.168 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.427 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret 
DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:47.427 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.362 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.685 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:48.685 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.685 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.685 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.685 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.685 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.686 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.686 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.686 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.686 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.686 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.686 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:48.686 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.250 00:21:49.250 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.250 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.250 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.507 { 00:21:49.507 "cntlid": 135, 00:21:49.507 "qid": 0, 00:21:49.507 "state": "enabled", 00:21:49.507 "thread": "nvmf_tgt_poll_group_000", 00:21:49.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.507 "listen_address": { 00:21:49.507 "trtype": "TCP", 00:21:49.507 "adrfam": "IPv4", 00:21:49.507 "traddr": "10.0.0.2", 00:21:49.507 "trsvcid": "4420" 00:21:49.507 }, 00:21:49.507 "peer_address": { 00:21:49.507 "trtype": "TCP", 00:21:49.507 "adrfam": "IPv4", 00:21:49.507 "traddr": "10.0.0.1", 00:21:49.507 "trsvcid": "44100" 00:21:49.507 }, 00:21:49.507 "auth": { 00:21:49.507 "state": "completed", 00:21:49.507 "digest": "sha512", 00:21:49.507 "dhgroup": "ffdhe6144" 00:21:49.507 } 00:21:49.507 } 00:21:49.507 ]' 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.507 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.507 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.507 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.507 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.507 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.507 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.764 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:49.764 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.697 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.955 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.888 00:21:51.888 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.888 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.888 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.146 { 00:21:52.146 "cntlid": 137, 00:21:52.146 "qid": 0, 00:21:52.146 "state": "enabled", 00:21:52.146 "thread": "nvmf_tgt_poll_group_000", 00:21:52.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.146 "listen_address": { 00:21:52.146 "trtype": "TCP", 00:21:52.146 "adrfam": "IPv4", 00:21:52.146 "traddr": "10.0.0.2", 00:21:52.146 "trsvcid": "4420" 00:21:52.146 }, 00:21:52.146 "peer_address": { 00:21:52.146 "trtype": "TCP", 00:21:52.146 "adrfam": "IPv4", 00:21:52.146 "traddr": "10.0.0.1", 00:21:52.146 "trsvcid": "44130" 00:21:52.146 }, 00:21:52.146 "auth": { 00:21:52.146 "state": "completed", 00:21:52.146 "digest": "sha512", 00:21:52.146 "dhgroup": "ffdhe8192" 00:21:52.146 } 00:21:52.146 } 00:21:52.146 ]' 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.146 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.404 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.404 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.404 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.404 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.404 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.661 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:52.661 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:21:53.595 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.595 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.595 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.595 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.595 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.595 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.595 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.595 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.853 03:03:04 
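The loop above repeats one pattern per key and DH group: restrict the host's DH-CHAP parameters, register the key for the host NQN on the target, then attach. A condensed sketch of a single pass, assuming scripts/rpc.py abbreviates the full rpc.py path shown in the log, the target answers on the default /var/tmp/spdk.sock, and the host-side application on /var/tmp/host.sock:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # host side: allow only one digest/dhgroup pair for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: admit the host NQN with key1, plus ckey1 for bidirectional auth
  scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach and authenticate with the same key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1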
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.853 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.787 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.787 { 00:21:54.787 "cntlid": 139, 00:21:54.787 "qid": 0, 00:21:54.787 "state": "enabled", 00:21:54.787 "thread": "nvmf_tgt_poll_group_000", 00:21:54.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.787 "listen_address": { 00:21:54.787 "trtype": "TCP", 00:21:54.787 "adrfam": "IPv4", 00:21:54.787 "traddr": "10.0.0.2", 00:21:54.787 "trsvcid": "4420" 00:21:54.787 }, 00:21:54.787 "peer_address": { 00:21:54.787 "trtype": "TCP", 00:21:54.787 "adrfam": "IPv4", 00:21:54.787 "traddr": "10.0.0.1", 00:21:54.787 "trsvcid": "44154" 00:21:54.787 }, 00:21:54.787 "auth": { 00:21:54.787 "state": "completed", 00:21:54.787 "digest": "sha512", 00:21:54.787 "dhgroup": "ffdhe8192" 00:21:54.787 } 00:21:54.787 } 00:21:54.787 ]' 00:21:54.787 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.045 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.045 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.045 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.045 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.045 03:03:05 
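After each attach, the negotiated parameters are read back from the target and compared against the pass's expectations, which is what the qpair JSON above is for. A minimal version of that check, assuming the default target RPC socket:

  # confirm the qpair completed DH-CHAP with the expected digest and DH group
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]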
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.045 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.045 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.303 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:55.303 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: --dhchap-ctrl-secret DHHC-1:02:ZmM1ZDRlNWY2NGUzMDBmNGMwY2ViNTU2YjBjYjE3Y2Y4YzMwZDdhYTBkYzhhZGY1sg8zcg==: 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.236 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.498 03:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.498 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.431 00:21:57.431 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.431 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.431 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.690 { 00:21:57.690 "cntlid": 141, 00:21:57.690 "qid": 0, 00:21:57.690 "state": "enabled", 00:21:57.690 "thread": "nvmf_tgt_poll_group_000", 00:21:57.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.690 "listen_address": { 00:21:57.690 "trtype": "TCP", 00:21:57.690 "adrfam": "IPv4", 00:21:57.690 "traddr": "10.0.0.2", 00:21:57.690 "trsvcid": "4420" 00:21:57.690 }, 00:21:57.690 "peer_address": { 00:21:57.690 "trtype": "TCP", 00:21:57.690 "adrfam": "IPv4", 00:21:57.690 "traddr": "10.0.0.1", 00:21:57.690 "trsvcid": "38318" 00:21:57.690 }, 00:21:57.690 "auth": { 00:21:57.690 "state": "completed", 00:21:57.690 "digest": "sha512", 00:21:57.690 "dhgroup": "ffdhe8192" 00:21:57.690 } 00:21:57.690 } 00:21:57.690 ]' 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.690 03:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.690 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.948 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:57.948 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:01:ZTQ1NzY2YTVmYjQ2ZGJiYjMzMDgxZTE0ZGE2MzVlM2STz2er: 00:21:58.879 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.880 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.880 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.880 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.880 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.880 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.880 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:58.880 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.137 03:03:09 
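Each pass also exercises the kernel initiator with nvme-cli, using the same DHHC-1 formatted secrets. A sketch with the flags from this run; the angle-bracketed secret strings are placeholders, not valid keys:

  # connect with a host DH-CHAP secret and a controller secret for bidirectional auth
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret "<DHHC-1 host secret>" \
      --dhchap-ctrl-secret "<DHHC-1 controller secret>"
  # drop the association again before the next key/dhgroup combination
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0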
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.137 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.070 00:22:00.070 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.070 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.070 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.327 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.327 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.327 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.327 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.327 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.328 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.328 { 00:22:00.328 "cntlid": 143, 00:22:00.328 "qid": 0, 00:22:00.328 "state": "enabled", 00:22:00.328 "thread": "nvmf_tgt_poll_group_000", 00:22:00.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.328 "listen_address": { 00:22:00.328 "trtype": "TCP", 00:22:00.328 "adrfam": "IPv4", 00:22:00.328 "traddr": "10.0.0.2", 00:22:00.328 "trsvcid": "4420" 00:22:00.328 }, 00:22:00.328 "peer_address": { 00:22:00.328 "trtype": "TCP", 00:22:00.328 "adrfam": "IPv4", 00:22:00.328 "traddr": "10.0.0.1", 00:22:00.328 "trsvcid": "38348" 00:22:00.328 }, 00:22:00.328 "auth": { 00:22:00.328 "state": "completed", 00:22:00.328 "digest": "sha512", 00:22:00.328 "dhgroup": "ffdhe8192" 00:22:00.328 } 00:22:00.328 } 00:22:00.328 ]' 00:22:00.328 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.328 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.328 
03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.328 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.328 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.586 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.586 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.586 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.844 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:22:00.844 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.778 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.036 03:03:12 
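For the final set of passes the host is reconfigured once to offer every digest and DH group, leaving the choice to negotiation rather than a one-element list; the qpairs checked afterwards in this run still report sha512 with ffdhe8192. The single call, with the lists from the log:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192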
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.036 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.969 00:22:02.969 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.969 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.969 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.227 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.228 { 00:22:03.228 "cntlid": 145, 00:22:03.228 "qid": 0, 00:22:03.228 "state": "enabled", 00:22:03.228 "thread": "nvmf_tgt_poll_group_000", 00:22:03.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.228 "listen_address": { 00:22:03.228 "trtype": "TCP", 00:22:03.228 "adrfam": "IPv4", 00:22:03.228 "traddr": "10.0.0.2", 00:22:03.228 "trsvcid": "4420" 00:22:03.228 }, 00:22:03.228 "peer_address": { 00:22:03.228 
"trtype": "TCP", 00:22:03.228 "adrfam": "IPv4", 00:22:03.228 "traddr": "10.0.0.1", 00:22:03.228 "trsvcid": "38372" 00:22:03.228 }, 00:22:03.228 "auth": { 00:22:03.228 "state": "completed", 00:22:03.228 "digest": "sha512", 00:22:03.228 "dhgroup": "ffdhe8192" 00:22:03.228 } 00:22:03.228 } 00:22:03.228 ]' 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.228 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.486 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:22:03.486 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWRkMzY3NjE0OTVlNjAxZDU1ZjdmOTJhMTUyM2IxYjA2MzY5M2E4OTJmMGYyZWY1ZkzyCA==: --dhchap-ctrl-secret DHHC-1:03:OWNhOWEyOTE5ODhiMzIzYjViZDIzMmI5NDViYWVkNGE1MjlhOWM5MTBhN2UwNzc0ZjAwMThhMzJlMjZmNWRiM2EMFdk=: 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:04.418 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:05.350 request: 00:22:05.350 { 00:22:05.350 "name": "nvme0", 00:22:05.350 "trtype": "tcp", 00:22:05.350 "traddr": "10.0.0.2", 00:22:05.350 "adrfam": "ipv4", 00:22:05.350 "trsvcid": "4420", 00:22:05.350 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.350 "prchk_reftag": false, 00:22:05.350 "prchk_guard": false, 00:22:05.350 "hdgst": false, 00:22:05.350 "ddgst": false, 00:22:05.350 "dhchap_key": "key2", 00:22:05.350 "allow_unrecognized_csi": false, 00:22:05.350 "method": "bdev_nvme_attach_controller", 00:22:05.350 "req_id": 1 00:22:05.350 } 00:22:05.350 Got JSON-RPC error response 00:22:05.350 response: 00:22:05.350 { 00:22:05.350 "code": -5, 00:22:05.350 "message": "Input/output error" 00:22:05.350 } 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.350 03:03:15 
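The "Input/output error" (code -5) responses above are the expected result of the negative paths: the host presents key2 while the target only has key1 registered for this host NQN, or, in the later passes, the controller keys do not match, so authentication fails and the attach RPC errors out. The pattern reduces to asserting a nonzero exit status, for example:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # target knows only key1 for this host; attaching with key2 must fail
  if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
         -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
         --dhchap-key key2; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi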
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.350 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.351 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.284 request: 00:22:06.284 { 00:22:06.284 "name": "nvme0", 00:22:06.284 "trtype": "tcp", 00:22:06.284 "traddr": "10.0.0.2", 00:22:06.284 "adrfam": "ipv4", 00:22:06.284 "trsvcid": "4420", 00:22:06.284 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.284 "prchk_reftag": false, 00:22:06.284 "prchk_guard": false, 00:22:06.284 "hdgst": false, 00:22:06.284 "ddgst": false, 00:22:06.284 "dhchap_key": "key1", 00:22:06.284 "dhchap_ctrlr_key": "ckey2", 00:22:06.284 "allow_unrecognized_csi": false, 00:22:06.284 "method": "bdev_nvme_attach_controller", 00:22:06.285 "req_id": 1 00:22:06.285 } 00:22:06.285 Got JSON-RPC error response 00:22:06.285 response: 00:22:06.285 { 00:22:06.285 "code": -5, 00:22:06.285 "message": "Input/output error" 00:22:06.285 } 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.285 03:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.285 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.851 request: 00:22:06.851 { 00:22:06.851 "name": "nvme0", 00:22:06.851 "trtype": "tcp", 00:22:06.851 "traddr": "10.0.0.2", 00:22:06.851 "adrfam": "ipv4", 00:22:06.851 "trsvcid": "4420", 00:22:06.851 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.851 "prchk_reftag": false, 00:22:06.851 "prchk_guard": false, 00:22:06.851 "hdgst": false, 00:22:06.851 "ddgst": false, 00:22:06.851 "dhchap_key": "key1", 00:22:06.851 "dhchap_ctrlr_key": "ckey1", 00:22:06.851 "allow_unrecognized_csi": false, 00:22:06.851 "method": "bdev_nvme_attach_controller", 00:22:06.851 "req_id": 1 00:22:06.851 } 00:22:06.851 Got JSON-RPC error response 00:22:06.851 response: 00:22:06.851 { 00:22:06.851 "code": -5, 00:22:06.851 "message": "Input/output error" 00:22:06.851 } 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 241275 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 241275 ']' 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241275 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.851 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241275 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241275' 00:22:07.132 killing process with pid 241275 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241275 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241275 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=264337 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 264337 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264337 ']' 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.132 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.405 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.405 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:07.405 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.405 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 264337 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264337 ']' 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.406 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.699 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.699 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:07.699 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:07.699 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.699 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.986 null0 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gUO 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Mnb ]] 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Mnb 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.986 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4dD 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4nD ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4nD 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.987 03:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ldh 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.6cm ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6cm 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.puC 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
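The trace above re-registers the generated DH-HMAC-CHAP secrets with the target's keyring, authorizes the host NQN with key3, and then attaches from the host application. Reduced to the bare RPCs that the trace itself invokes, the sequence looks roughly like the sketch below; the key file names, NQNs and socket paths are simply the ones printed earlier in this run and are shown only as placeholders for this setup.

    # target side: register the secret file and authorize the host for DH-HMAC-CHAP (sketch)
    scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.puC
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key3
    # host side: attach through the host app's RPC socket using the same key
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3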
00:22:07.987 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.405 nvme0n1 00:22:09.405 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.405 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.405 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.663 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.663 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.663 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.663 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.663 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.663 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.663 { 00:22:09.663 "cntlid": 1, 00:22:09.663 "qid": 0, 00:22:09.663 "state": "enabled", 00:22:09.663 "thread": "nvmf_tgt_poll_group_000", 00:22:09.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.663 "listen_address": { 00:22:09.663 "trtype": "TCP", 00:22:09.663 "adrfam": "IPv4", 00:22:09.663 "traddr": "10.0.0.2", 00:22:09.663 "trsvcid": "4420" 00:22:09.663 }, 00:22:09.663 "peer_address": { 00:22:09.663 "trtype": "TCP", 00:22:09.663 "adrfam": "IPv4", 00:22:09.663 "traddr": "10.0.0.1", 00:22:09.663 "trsvcid": "35258" 00:22:09.663 }, 00:22:09.663 "auth": { 00:22:09.663 "state": "completed", 00:22:09.664 "digest": "sha512", 00:22:09.664 "dhgroup": "ffdhe8192" 00:22:09.664 } 00:22:09.664 } 00:22:09.664 ]' 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.664 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.922 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:22:09.922 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:10.856 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.116 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.372 request: 00:22:11.372 { 00:22:11.372 "name": "nvme0", 00:22:11.372 "trtype": "tcp", 00:22:11.372 "traddr": "10.0.0.2", 00:22:11.372 "adrfam": "ipv4", 00:22:11.372 "trsvcid": "4420", 00:22:11.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.372 "prchk_reftag": false, 00:22:11.372 "prchk_guard": false, 00:22:11.372 "hdgst": false, 00:22:11.372 "ddgst": false, 00:22:11.372 "dhchap_key": "key3", 00:22:11.372 "allow_unrecognized_csi": false, 00:22:11.372 "method": "bdev_nvme_attach_controller", 00:22:11.372 "req_id": 1 00:22:11.372 } 00:22:11.372 Got JSON-RPC error response 00:22:11.372 response: 00:22:11.372 { 00:22:11.372 "code": -5, 00:22:11.372 "message": "Input/output error" 00:22:11.372 } 00:22:11.372 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:11.372 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.373 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.373 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.373 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:11.373 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:11.373 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:11.373 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.630 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.887 request: 00:22:11.887 { 00:22:11.887 "name": "nvme0", 00:22:11.887 "trtype": "tcp", 00:22:11.887 "traddr": "10.0.0.2", 00:22:11.887 "adrfam": "ipv4", 00:22:11.887 "trsvcid": "4420", 00:22:11.887 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.887 "prchk_reftag": false, 00:22:11.887 "prchk_guard": false, 00:22:11.887 "hdgst": false, 00:22:11.887 "ddgst": false, 00:22:11.887 "dhchap_key": "key3", 00:22:11.887 "allow_unrecognized_csi": false, 00:22:11.887 "method": "bdev_nvme_attach_controller", 00:22:11.887 "req_id": 1 00:22:11.887 } 00:22:11.887 Got JSON-RPC error response 00:22:11.887 response: 00:22:11.887 { 00:22:11.887 "code": -5, 00:22:11.887 "message": "Input/output error" 00:22:11.887 } 00:22:11.887 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.146 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.404 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.970 request: 00:22:12.970 { 00:22:12.970 "name": "nvme0", 00:22:12.970 "trtype": "tcp", 00:22:12.970 "traddr": "10.0.0.2", 00:22:12.970 "adrfam": "ipv4", 00:22:12.970 "trsvcid": "4420", 00:22:12.970 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.970 "prchk_reftag": false, 00:22:12.970 "prchk_guard": false, 00:22:12.970 "hdgst": false, 00:22:12.970 "ddgst": false, 00:22:12.970 "dhchap_key": "key0", 00:22:12.970 "dhchap_ctrlr_key": "key1", 00:22:12.970 "allow_unrecognized_csi": false, 00:22:12.970 "method": "bdev_nvme_attach_controller", 00:22:12.970 "req_id": 1 00:22:12.970 } 00:22:12.970 Got JSON-RPC error response 00:22:12.970 response: 00:22:12.970 { 00:22:12.970 "code": -5, 00:22:12.970 "message": "Input/output error" 00:22:12.970 } 00:22:12.970 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.970 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.970 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.970 03:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.970 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:12.970 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:12.970 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:13.228 nvme0n1 00:22:13.228 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:13.228 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:13.228 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.486 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.486 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.486 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.744 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:13.744 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.744 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.744 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.744 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:13.744 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.744 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:15.121 nvme0n1 00:22:15.121 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:15.121 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:15.121 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:15.379 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.638 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.638 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:22:15.638 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: --dhchap-ctrl-secret DHHC-1:03:ZjNmYzRhZjNmM2QxZmQ3M2YyZmNlMmE3YTMzOTVlZTUwYzY3ZGU3NTA4NGM0MmEzNThkNzY3NTc5N2FjMGU3OXHU5uE=: 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.573 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:16.833 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:17.773 request: 00:22:17.773 { 00:22:17.773 "name": "nvme0", 00:22:17.773 "trtype": "tcp", 00:22:17.773 "traddr": "10.0.0.2", 00:22:17.773 "adrfam": "ipv4", 00:22:17.773 "trsvcid": "4420", 00:22:17.773 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:17.773 "prchk_reftag": false, 00:22:17.773 "prchk_guard": false, 00:22:17.773 "hdgst": false, 00:22:17.773 "ddgst": false, 00:22:17.773 "dhchap_key": "key1", 00:22:17.773 "allow_unrecognized_csi": false, 00:22:17.773 "method": "bdev_nvme_attach_controller", 00:22:17.773 "req_id": 1 00:22:17.773 } 00:22:17.773 Got JSON-RPC error response 00:22:17.773 response: 00:22:17.773 { 00:22:17.773 "code": -5, 00:22:17.773 "message": "Input/output error" 00:22:17.773 } 00:22:17.773 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:17.773 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.773 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.773 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.773 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.773 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.773 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.152 nvme0n1 00:22:19.152 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:19.152 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:19.152 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.410 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.410 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.410 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.668 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.668 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.668 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.668 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.668 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:19.668 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:19.668 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:19.926 nvme0n1 00:22:19.926 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:19.926 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:19.926 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.185 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.185 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.185 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: '' 2s 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: ]] 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODAzNTljYmJiMWI3NTQ3NGVkZTk4NDBiN2Y5YjI5M2TQ4pqi: 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:20.750 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: 2s 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: ]] 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2EzYmQyYzYxNTE0NmZlNTEyYjQ3ZjViMTMxNDE5ZjMzMWE3MTFjNjMzNWI0NDdmYJnh7w==: 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:22.648 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.175 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:26.109 nvme0n1 00:22:26.109 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:26.109 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.109 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.109 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.109 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:26.109 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.042 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:27.042 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:27.042 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.301 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.301 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.301 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.301 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.301 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.301 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:27.301 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:27.559 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:27.559 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:27.559 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:27.817 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.753 request: 00:22:28.753 { 00:22:28.753 "name": "nvme0", 00:22:28.753 "dhchap_key": "key1", 00:22:28.753 "dhchap_ctrlr_key": "key3", 00:22:28.753 "method": "bdev_nvme_set_keys", 00:22:28.753 "req_id": 1 00:22:28.753 } 00:22:28.753 Got JSON-RPC error response 00:22:28.753 response: 00:22:28.753 { 00:22:28.753 "code": -13, 00:22:28.753 "message": "Permission denied" 00:22:28.753 } 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:28.753 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.127 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:31.501 nvme0n1 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
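The rotation steps traced here update the subsystem's keys first and only then re-key the live host controller; when the host asks for a key pair the target no longer accepts, bdev_nvme_set_keys fails with -13 (Permission denied), as the surrounding request/response blocks show. A minimal sketch of the happy-path order, using the same RPCs and key slots as this run (names are placeholders from the trace):

    # rotate to key2/key3: update the target-side subsystem first
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # then re-key the already-connected controller on the host side
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3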
00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:31.501 03:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.434 request: 00:22:32.434 { 00:22:32.434 "name": "nvme0", 00:22:32.434 "dhchap_key": "key2", 00:22:32.434 "dhchap_ctrlr_key": "key0", 00:22:32.434 "method": "bdev_nvme_set_keys", 00:22:32.434 "req_id": 1 00:22:32.434 } 00:22:32.434 Got JSON-RPC error response 00:22:32.434 response: 00:22:32.434 { 00:22:32.434 "code": -13, 00:22:32.434 "message": "Permission denied" 00:22:32.434 } 00:22:32.434 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:32.434 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.434 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.434 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.434 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:32.434 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:32.434 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.693 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:32.693 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:33.628 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:33.628 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:33.628 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 241301 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 241301 ']' 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241301 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:33.886 03:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241301 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241301' 00:22:33.886 killing process with pid 241301 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241301 00:22:33.886 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241301 00:22:34.451 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:34.451 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.452 rmmod nvme_tcp 00:22:34.452 rmmod nvme_fabrics 00:22:34.452 rmmod nvme_keyring 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 264337 ']' 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 264337 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 264337 ']' 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 264337 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264337 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264337' 00:22:34.452 killing process with pid 264337 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 264337 00:22:34.452 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 264337 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.711 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gUO /tmp/spdk.key-sha256.4dD /tmp/spdk.key-sha384.Ldh /tmp/spdk.key-sha512.puC /tmp/spdk.key-sha512.Mnb /tmp/spdk.key-sha384.4nD /tmp/spdk.key-sha256.6cm '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:36.619 00:22:36.619 real 3m33.666s 00:22:36.619 user 8m19.814s 00:22:36.619 sys 0m28.275s 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.619 ************************************ 00:22:36.619 END TEST nvmf_auth_target 00:22:36.619 ************************************ 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.619 03:03:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:36.880 ************************************ 00:22:36.880 START TEST nvmf_bdevio_no_huge 00:22:36.880 ************************************ 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:36.880 * Looking for test storage... 
00:22:36.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:36.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.880 --rc genhtml_branch_coverage=1 00:22:36.880 --rc genhtml_function_coverage=1 00:22:36.880 --rc genhtml_legend=1 00:22:36.880 --rc geninfo_all_blocks=1 00:22:36.880 --rc geninfo_unexecuted_blocks=1 00:22:36.880 00:22:36.880 ' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:36.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.880 --rc genhtml_branch_coverage=1 00:22:36.880 --rc genhtml_function_coverage=1 00:22:36.880 --rc genhtml_legend=1 00:22:36.880 --rc geninfo_all_blocks=1 00:22:36.880 --rc geninfo_unexecuted_blocks=1 00:22:36.880 00:22:36.880 ' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:36.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.880 --rc genhtml_branch_coverage=1 00:22:36.880 --rc genhtml_function_coverage=1 00:22:36.880 --rc genhtml_legend=1 00:22:36.880 --rc geninfo_all_blocks=1 00:22:36.880 --rc geninfo_unexecuted_blocks=1 00:22:36.880 00:22:36.880 ' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:36.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.880 --rc genhtml_branch_coverage=1 00:22:36.880 --rc genhtml_function_coverage=1 00:22:36.880 --rc genhtml_legend=1 00:22:36.880 --rc geninfo_all_blocks=1 00:22:36.880 --rc geninfo_unexecuted_blocks=1 00:22:36.880 00:22:36.880 ' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.880 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:36.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.881 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.421 
03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.421 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.421 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.421 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.421 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:22:39.422 00:22:39.422 --- 10.0.0.2 ping statistics --- 00:22:39.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.422 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:39.422 00:22:39.422 --- 10.0.0.1 ping statistics --- 00:22:39.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.422 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=269596 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 269596 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 269596 ']' 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.422 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.422 [2024-11-19 03:03:49.825642] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:22:39.422 [2024-11-19 03:03:49.825747] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:39.422 [2024-11-19 03:03:49.901028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.422 [2024-11-19 03:03:49.947894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.422 [2024-11-19 03:03:49.947951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.422 [2024-11-19 03:03:49.947990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.422 [2024-11-19 03:03:49.948002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.422 [2024-11-19 03:03:49.948011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:39.422 [2024-11-19 03:03:49.949055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.422 [2024-11-19 03:03:49.949118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:39.422 [2024-11-19 03:03:49.949177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:39.422 [2024-11-19 03:03:49.949177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.681 [2024-11-19 03:03:50.107923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.681 Malloc0 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.681 [2024-11-19 03:03:50.146544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:39.681 { 00:22:39.681 "params": { 00:22:39.681 "name": "Nvme$subsystem", 00:22:39.681 "trtype": "$TEST_TRANSPORT", 00:22:39.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.681 "adrfam": "ipv4", 00:22:39.681 "trsvcid": "$NVMF_PORT", 00:22:39.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.681 "hdgst": ${hdgst:-false}, 00:22:39.681 "ddgst": ${ddgst:-false} 00:22:39.681 }, 00:22:39.681 "method": "bdev_nvme_attach_controller" 00:22:39.681 } 00:22:39.681 EOF 00:22:39.681 )") 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:39.681 03:03:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:39.681 "params": { 00:22:39.681 "name": "Nvme1", 00:22:39.681 "trtype": "tcp", 00:22:39.681 "traddr": "10.0.0.2", 00:22:39.681 "adrfam": "ipv4", 00:22:39.681 "trsvcid": "4420", 00:22:39.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.681 "hdgst": false, 00:22:39.681 "ddgst": false 00:22:39.681 }, 00:22:39.681 "method": "bdev_nvme_attach_controller" 00:22:39.681 }' 00:22:39.681 [2024-11-19 03:03:50.197143] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:22:39.681 [2024-11-19 03:03:50.197210] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid269631 ] 00:22:39.681 [2024-11-19 03:03:50.266069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:39.940 [2024-11-19 03:03:50.317356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.940 [2024-11-19 03:03:50.317408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.940 [2024-11-19 03:03:50.317411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.198 I/O targets: 00:22:40.198 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:40.198 00:22:40.198 00:22:40.198 CUnit - A unit testing framework for C - Version 2.1-3 00:22:40.198 http://cunit.sourceforge.net/ 00:22:40.198 00:22:40.198 00:22:40.198 Suite: bdevio tests on: Nvme1n1 00:22:40.198 Test: blockdev write read block ...passed 00:22:40.198 Test: blockdev write zeroes read block ...passed 00:22:40.198 Test: blockdev write zeroes read no split ...passed 00:22:40.198 Test: blockdev write zeroes read split ...passed 00:22:40.198 Test: blockdev write zeroes read split partial ...passed 00:22:40.198 Test: blockdev reset ...[2024-11-19 03:03:50.780783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:40.198 [2024-11-19 03:03:50.780906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b46a0 (9): Bad file descriptor 00:22:40.457 [2024-11-19 03:03:50.930849] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:40.457 passed 00:22:40.457 Test: blockdev write read 8 blocks ...passed 00:22:40.457 Test: blockdev write read size > 128k ...passed 00:22:40.457 Test: blockdev write read invalid size ...passed 00:22:40.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:40.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:40.457 Test: blockdev write read max offset ...passed 00:22:40.715 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:40.715 Test: blockdev writev readv 8 blocks ...passed 00:22:40.715 Test: blockdev writev readv 30 x 1block ...passed 00:22:40.715 Test: blockdev writev readv block ...passed 00:22:40.715 Test: blockdev writev readv size > 128k ...passed 00:22:40.715 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:40.715 Test: blockdev comparev and writev ...[2024-11-19 03:03:51.141718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.715 [2024-11-19 03:03:51.141756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.715 [2024-11-19 03:03:51.141782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.716 [2024-11-19 03:03:51.141800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.142116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.716 [2024-11-19 03:03:51.142141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.142163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.716 [2024-11-19 03:03:51.142180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.142480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.716 [2024-11-19 03:03:51.142504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.142526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.716 [2024-11-19 03:03:51.142542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.142876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.716 [2024-11-19 03:03:51.142900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.142922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.716 [2024-11-19 03:03:51.142937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:40.716 passed 00:22:40.716 Test: blockdev nvme passthru rw ...passed 00:22:40.716 Test: blockdev nvme passthru vendor specific ...[2024-11-19 03:03:51.224937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.716 [2024-11-19 03:03:51.224964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.225102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.716 [2024-11-19 03:03:51.225124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.225259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.716 [2024-11-19 03:03:51.225280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:40.716 [2024-11-19 03:03:51.225417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.716 [2024-11-19 03:03:51.225439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:40.716 passed 00:22:40.716 Test: blockdev nvme admin passthru ...passed 00:22:40.716 Test: blockdev copy ...passed 00:22:40.716 00:22:40.716 Run Summary: Type Total Ran Passed Failed Inactive 00:22:40.716 suites 1 1 n/a 0 0 00:22:40.716 tests 23 23 23 0 0 00:22:40.716 asserts 152 152 152 0 n/a 00:22:40.716 00:22:40.716 Elapsed time = 1.241 seconds 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.283 rmmod nvme_tcp 00:22:41.283 rmmod nvme_fabrics 00:22:41.283 rmmod nvme_keyring 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 269596 ']' 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 269596 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 269596 ']' 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 269596 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 269596 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 269596' 00:22:41.283 killing process with pid 269596 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 269596 00:22:41.283 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 269596 00:22:41.543 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.543 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.543 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.543 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.544 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.092 00:22:44.092 real 0m6.846s 00:22:44.092 user 0m11.663s 00:22:44.092 sys 0m2.618s 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:44.092 ************************************ 00:22:44.092 END TEST nvmf_bdevio_no_huge 00:22:44.092 ************************************ 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.092 ************************************ 00:22:44.092 START TEST nvmf_tls 00:22:44.092 ************************************ 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:44.092 * Looking for test storage... 00:22:44.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.092 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.093 --rc genhtml_branch_coverage=1 00:22:44.093 --rc genhtml_function_coverage=1 00:22:44.093 --rc genhtml_legend=1 00:22:44.093 --rc geninfo_all_blocks=1 00:22:44.093 --rc geninfo_unexecuted_blocks=1 00:22:44.093 00:22:44.093 ' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.093 --rc genhtml_branch_coverage=1 00:22:44.093 --rc genhtml_function_coverage=1 00:22:44.093 --rc genhtml_legend=1 00:22:44.093 --rc geninfo_all_blocks=1 00:22:44.093 --rc geninfo_unexecuted_blocks=1 00:22:44.093 00:22:44.093 ' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.093 --rc genhtml_branch_coverage=1 00:22:44.093 --rc genhtml_function_coverage=1 00:22:44.093 --rc genhtml_legend=1 00:22:44.093 --rc geninfo_all_blocks=1 00:22:44.093 --rc geninfo_unexecuted_blocks=1 00:22:44.093 00:22:44.093 ' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.093 --rc genhtml_branch_coverage=1 00:22:44.093 --rc genhtml_function_coverage=1 00:22:44.093 --rc genhtml_legend=1 00:22:44.093 --rc geninfo_all_blocks=1 00:22:44.093 --rc geninfo_unexecuted_blocks=1 00:22:44.093 00:22:44.093 ' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
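The trace above is the harness probing the installed lcov before the TLS test starts: it takes the last field of `lcov --version` and pushes it through the `lt`/`cmp_versions` helpers from scripts/common.sh, which split each version string on `.`, `-` and `:` and compare it field by field. A minimal standalone sketch of that comparison, reconstructed from the traced commands (the real helper also validates each field via `decimal` and supports the other operators, which is omitted here):

  # Sketch: true when version $1 sorts before version $2, field by field
  lt() {
      local -a ver1 ver2
      local v d1 d2
      IFS=.-: read -ra ver1 <<< "$1"          # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$2"          # "2"    -> (2)
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
          ((d1 < d2)) && return 0
          ((d1 > d2)) && return 1
      done
      return 1                                # equal versions are not "less than"
  }

  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"

In this log the probe matches an lcov 1.x install, which is why the 1.x-style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags end up exported in LCOV_OPTS and LCOV just below.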
00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:44.093 03:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.004 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.004 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.004 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.004 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.004 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:46.005 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:46.005 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:46.005 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:46.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.005 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:22:46.005 00:22:46.006 --- 10.0.0.2 ping statistics --- 00:22:46.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.006 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:22:46.006 00:22:46.006 --- 10.0.0.1 ping statistics --- 00:22:46.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.006 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=271824 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 271824 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 271824 ']' 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.006 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.006 [2024-11-19 03:03:56.607528] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
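nvmftestinit, traced above, stitches the two E810 ports it discovered (cvl_0_0 and cvl_0_1) into a loop on a single host: the target port is moved into a network namespace and addressed as 10.0.0.2, the initiator side keeps 10.0.0.1 on the host, and the target application is always launched through `ip netns exec`. Condensed from the nvmf/common.sh commands shown in this log (interface names and addresses are exactly the ones traced here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address stays on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port; the rule is tagged SPDK_NVMF so teardown can strip it
  # later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # sanity pings in both directions, then the target runs inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc

The two pings recorded above (0.187 ms and 0.096 ms) are the check that this plumbing works before the nvmf_tgt process is started with --wait-for-rpc.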
00:22:46.006 [2024-11-19 03:03:56.607619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.265 [2024-11-19 03:03:56.682525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.265 [2024-11-19 03:03:56.727882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.265 [2024-11-19 03:03:56.727942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.265 [2024-11-19 03:03:56.727956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.265 [2024-11-19 03:03:56.727967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.265 [2024-11-19 03:03:56.727991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.265 [2024-11-19 03:03:56.728583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:46.265 03:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:46.831 true 00:22:46.831 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:46.831 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.090 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:47.090 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:47.090 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:47.348 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.348 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:47.607 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:47.607 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:47.607 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:47.866 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.866 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:48.125 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:48.125 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:48.125 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.125 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:48.384 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:48.384 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:48.384 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:48.648 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.648 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:48.912 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:48.912 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:48.912 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:49.171 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:49.171 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:49.430 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:49.430 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3hXEtMil6x 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.mBE4aLJ8b9 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3hXEtMil6x 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.mBE4aLJ8b9 00:22:49.689 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:49.947 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:50.205 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3hXEtMil6x 00:22:50.205 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3hXEtMil6x 00:22:50.205 03:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.464 [2024-11-19 03:04:01.008710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.464 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.722 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.980 [2024-11-19 03:04:01.562187] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.980 [2024-11-19 03:04:01.562415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.980 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:51.238 malloc0 00:22:51.238 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:51.803 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3hXEtMil6x 00:22:51.804 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:52.061 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3hXEtMil6x 00:23:04.261 Initializing NVMe Controllers 00:23:04.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.261 Initialization complete. Launching workers. 00:23:04.261 ======================================================== 00:23:04.261 Latency(us) 00:23:04.261 Device Information : IOPS MiB/s Average min max 00:23:04.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8731.89 34.11 7331.47 1145.68 42239.67 00:23:04.261 ======================================================== 00:23:04.261 Total : 8731.89 34.11 7331.47 1145.68 42239.67 00:23:04.261 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3hXEtMil6x 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3hXEtMil6x 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=273745 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 273745 /var/tmp/bdevperf.sock 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 273745 ']' 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:04.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.262 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.262 [2024-11-19 03:04:12.810069] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:04.262 [2024-11-19 03:04:12.810146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273745 ] 00:23:04.262 [2024-11-19 03:04:12.877020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.262 [2024-11-19 03:04:12.922185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.262 03:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.262 03:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.262 03:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3hXEtMil6x 00:23:04.262 03:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.262 [2024-11-19 03:04:13.552719] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.262 TLSTESTn1 00:23:04.262 03:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:04.262 Running I/O for 10 seconds... 
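The two mktemp files created above hold retained PSKs in the TLS interchange format visible in the trace (NVMeTLSkey-1:01:<base64 payload>:, produced by format_interchange_psk from the plaintext hex strings and written with mode 0600). With /tmp/tmp.3hXEtMil6x registered on both sides, the happy-path test that is now running reduces to the RPC sequence below, condensed from the traced commands (rpc.py and binary paths are shortened; in the log they live under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

  KEY=/tmp/tmp.3hXEtMil6x          # NVMeTLSkey-1:01:...: interchange key generated above
  RPC=scripts/rpc.py

  # target side, against the nvmf_tgt started with --wait-for-rpc
  $RPC sock_set_default_impl -i ssl
  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 "$KEY"
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side: spdk_nvme_perf takes the key file directly ...
  build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$KEY"

  # ... while bdevperf (started with -z -r /var/tmp/bdevperf.sock) registers the key on
  # its own RPC socket and attaches with --psk, creating the TLSTESTn1 bdev driven below
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

This is a condensed reading of the log, not the tls.sh source itself; the script additionally exercises sock_impl_get_options, the tls-version 7 setting and the ktls enable/disable toggles seen above before settling on TLS 1.3 for the I/O run.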
00:23:05.198 3013.00 IOPS, 11.77 MiB/s [2024-11-19T02:04:16.747Z] 3166.50 IOPS, 12.37 MiB/s [2024-11-19T02:04:18.122Z] 3206.33 IOPS, 12.52 MiB/s [2024-11-19T02:04:19.058Z] 3256.75 IOPS, 12.72 MiB/s [2024-11-19T02:04:19.993Z] 3291.80 IOPS, 12.86 MiB/s [2024-11-19T02:04:20.930Z] 3313.50 IOPS, 12.94 MiB/s [2024-11-19T02:04:21.865Z] 3320.00 IOPS, 12.97 MiB/s [2024-11-19T02:04:22.801Z] 3332.12 IOPS, 13.02 MiB/s [2024-11-19T02:04:24.176Z] 3340.22 IOPS, 13.05 MiB/s [2024-11-19T02:04:24.176Z] 3344.60 IOPS, 13.06 MiB/s 00:23:13.561 Latency(us) 00:23:13.561 [2024-11-19T02:04:24.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.561 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.561 Verification LBA range: start 0x0 length 0x2000 00:23:13.561 TLSTESTn1 : 10.02 3350.90 13.09 0.00 0.00 38138.29 6893.42 49710.27 00:23:13.561 [2024-11-19T02:04:24.176Z] =================================================================================================================== 00:23:13.561 [2024-11-19T02:04:24.176Z] Total : 3350.90 13.09 0.00 0.00 38138.29 6893.42 49710.27 00:23:13.561 { 00:23:13.561 "results": [ 00:23:13.561 { 00:23:13.561 "job": "TLSTESTn1", 00:23:13.561 "core_mask": "0x4", 00:23:13.561 "workload": "verify", 00:23:13.561 "status": "finished", 00:23:13.561 "verify_range": { 00:23:13.561 "start": 0, 00:23:13.561 "length": 8192 00:23:13.561 }, 00:23:13.561 "queue_depth": 128, 00:23:13.561 "io_size": 4096, 00:23:13.561 "runtime": 10.018808, 00:23:13.561 "iops": 3350.8976317342344, 00:23:13.561 "mibps": 13.089443873961853, 00:23:13.561 "io_failed": 0, 00:23:13.561 "io_timeout": 0, 00:23:13.561 "avg_latency_us": 38138.28580607296, 00:23:13.561 "min_latency_us": 6893.416296296296, 00:23:13.561 "max_latency_us": 49710.26962962963 00:23:13.561 } 00:23:13.561 ], 00:23:13.561 "core_count": 1 00:23:13.561 } 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 273745 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 273745 ']' 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 273745 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273745 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273745' 00:23:13.561 killing process with pid 273745 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 273745 00:23:13.561 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.561 00:23:13.561 Latency(us) 00:23:13.561 [2024-11-19T02:04:24.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.561 [2024-11-19T02:04:24.176Z] 
=================================================================================================================== 00:23:13.561 [2024-11-19T02:04:24.176Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.561 03:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 273745 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mBE4aLJ8b9 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mBE4aLJ8b9 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mBE4aLJ8b9 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mBE4aLJ8b9 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275077 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275077 /var/tmp/bdevperf.sock 00:23:13.561 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275077 ']' 00:23:13.562 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.562 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.562 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:13.562 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.562 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.562 [2024-11-19 03:04:24.079457] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:13.562 [2024-11-19 03:04:24.079536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275077 ] 00:23:13.562 [2024-11-19 03:04:24.146829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.820 [2024-11-19 03:04:24.194349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.820 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.820 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:13.820 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mBE4aLJ8b9 00:23:14.078 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.338 [2024-11-19 03:04:24.835455] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.338 [2024-11-19 03:04:24.841547] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:14.338 [2024-11-19 03:04:24.841589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f0370 (107): Transport endpoint is not connected 00:23:14.338 [2024-11-19 03:04:24.842565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f0370 (9): Bad file descriptor 00:23:14.338 [2024-11-19 03:04:24.843565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:14.338 [2024-11-19 03:04:24.843585] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:14.338 [2024-11-19 03:04:24.843614] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:14.338 [2024-11-19 03:04:24.843633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:14.338 request: 00:23:14.338 { 00:23:14.338 "name": "TLSTEST", 00:23:14.338 "trtype": "tcp", 00:23:14.338 "traddr": "10.0.0.2", 00:23:14.338 "adrfam": "ipv4", 00:23:14.338 "trsvcid": "4420", 00:23:14.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.338 "prchk_reftag": false, 00:23:14.338 "prchk_guard": false, 00:23:14.338 "hdgst": false, 00:23:14.338 "ddgst": false, 00:23:14.338 "psk": "key0", 00:23:14.338 "allow_unrecognized_csi": false, 00:23:14.338 "method": "bdev_nvme_attach_controller", 00:23:14.338 "req_id": 1 00:23:14.338 } 00:23:14.338 Got JSON-RPC error response 00:23:14.338 response: 00:23:14.338 { 00:23:14.338 "code": -5, 00:23:14.338 "message": "Input/output error" 00:23:14.338 } 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275077 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275077 ']' 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275077 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275077 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275077' 00:23:14.338 killing process with pid 275077 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275077 00:23:14.338 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.338 00:23:14.338 Latency(us) 00:23:14.338 [2024-11-19T02:04:24.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.338 [2024-11-19T02:04:24.953Z] =================================================================================================================== 00:23:14.338 [2024-11-19T02:04:24.953Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.338 03:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275077 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3hXEtMil6x 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.3hXEtMil6x 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3hXEtMil6x 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3hXEtMil6x 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275214 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275214 /var/tmp/bdevperf.sock 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275214 ']' 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.597 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.598 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.598 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.598 [2024-11-19 03:04:25.143381] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:14.598 [2024-11-19 03:04:25.143462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275214 ] 00:23:14.598 [2024-11-19 03:04:25.209131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.857 [2024-11-19 03:04:25.252331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.857 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.857 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.857 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3hXEtMil6x 00:23:15.116 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:15.375 [2024-11-19 03:04:25.903957] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.375 [2024-11-19 03:04:25.914239] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:15.375 [2024-11-19 03:04:25.914270] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:15.375 [2024-11-19 03:04:25.914333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:15.375 [2024-11-19 03:04:25.914373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846370 (107): Transport endpoint is not connected 00:23:15.375 [2024-11-19 03:04:25.915364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846370 (9): Bad file descriptor 00:23:15.375 [2024-11-19 03:04:25.916363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:15.375 [2024-11-19 03:04:25.916384] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:15.375 [2024-11-19 03:04:25.916414] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:15.375 [2024-11-19 03:04:25.916433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
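[editor's note] The JSON-RPC error dump that follows is the initiator-side result of the PSK lookup failure logged above: the target builds the PSK identity string "NVMe0R01 <hostnqn> <subnqn>" (seen verbatim in the posix_sock_psk_find_session_server_cb error), finds no key registered for that host/subsystem pair, and drops the connection, so the attach returns -5 (Input/output error). The test harness only needs to invert the command's exit status; a stand-in for that NOT / es=1 pattern is sketched below (illustrative only, not the actual autotest helper):

# Run a command and treat its failure as the expected outcome.
expect_failure() {
    if "$@"; then
        echo "unexpected success: $*" >&2
        return 1
    fi
    return 0
}
# e.g. expect_failure run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3hXEtMil6x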
00:23:15.375 request: 00:23:15.375 { 00:23:15.375 "name": "TLSTEST", 00:23:15.375 "trtype": "tcp", 00:23:15.375 "traddr": "10.0.0.2", 00:23:15.375 "adrfam": "ipv4", 00:23:15.375 "trsvcid": "4420", 00:23:15.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.375 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:15.375 "prchk_reftag": false, 00:23:15.375 "prchk_guard": false, 00:23:15.375 "hdgst": false, 00:23:15.375 "ddgst": false, 00:23:15.375 "psk": "key0", 00:23:15.375 "allow_unrecognized_csi": false, 00:23:15.375 "method": "bdev_nvme_attach_controller", 00:23:15.375 "req_id": 1 00:23:15.375 } 00:23:15.375 Got JSON-RPC error response 00:23:15.375 response: 00:23:15.375 { 00:23:15.375 "code": -5, 00:23:15.375 "message": "Input/output error" 00:23:15.375 } 00:23:15.375 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275214 00:23:15.375 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275214 ']' 00:23:15.375 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275214 00:23:15.375 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.376 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.376 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275214 00:23:15.376 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:15.376 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.376 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275214' 00:23:15.376 killing process with pid 275214 00:23:15.376 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275214 00:23:15.376 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.376 00:23:15.376 Latency(us) 00:23:15.376 [2024-11-19T02:04:25.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.376 [2024-11-19T02:04:25.991Z] =================================================================================================================== 00:23:15.376 [2024-11-19T02:04:25.991Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.376 03:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275214 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3hXEtMil6x 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.3hXEtMil6x 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.634 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3hXEtMil6x 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3hXEtMil6x 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275354 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275354 /var/tmp/bdevperf.sock 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275354 ']' 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.635 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.635 [2024-11-19 03:04:26.187168] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:15.635 [2024-11-19 03:04:26.187254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275354 ] 00:23:15.894 [2024-11-19 03:04:26.253064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.894 [2024-11-19 03:04:26.298187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.894 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.894 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.894 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3hXEtMil6x 00:23:16.153 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.411 [2024-11-19 03:04:26.929078] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.411 [2024-11-19 03:04:26.936826] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:16.411 [2024-11-19 03:04:26.936858] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:16.411 [2024-11-19 03:04:26.936913] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:16.411 [2024-11-19 03:04:26.937072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236d370 (107): Transport endpoint is not connected 00:23:16.411 [2024-11-19 03:04:26.938061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236d370 (9): Bad file descriptor 00:23:16.411 [2024-11-19 03:04:26.939061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:16.411 [2024-11-19 03:04:26.939081] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:16.412 [2024-11-19 03:04:26.939107] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:16.412 [2024-11-19 03:04:26.939132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:16.412 request: 00:23:16.412 { 00:23:16.412 "name": "TLSTEST", 00:23:16.412 "trtype": "tcp", 00:23:16.412 "traddr": "10.0.0.2", 00:23:16.412 "adrfam": "ipv4", 00:23:16.412 "trsvcid": "4420", 00:23:16.412 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.412 "prchk_reftag": false, 00:23:16.412 "prchk_guard": false, 00:23:16.412 "hdgst": false, 00:23:16.412 "ddgst": false, 00:23:16.412 "psk": "key0", 00:23:16.412 "allow_unrecognized_csi": false, 00:23:16.412 "method": "bdev_nvme_attach_controller", 00:23:16.412 "req_id": 1 00:23:16.412 } 00:23:16.412 Got JSON-RPC error response 00:23:16.412 response: 00:23:16.412 { 00:23:16.412 "code": -5, 00:23:16.412 "message": "Input/output error" 00:23:16.412 } 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275354 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275354 ']' 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275354 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275354 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275354' 00:23:16.412 killing process with pid 275354 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275354 00:23:16.412 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.412 00:23:16.412 Latency(us) 00:23:16.412 [2024-11-19T02:04:27.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.412 [2024-11-19T02:04:27.027Z] =================================================================================================================== 00:23:16.412 [2024-11-19T02:04:27.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:16.412 03:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275354 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:16.670 03:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275493 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275493 /var/tmp/bdevperf.sock 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275493 ']' 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.670 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.670 [2024-11-19 03:04:27.223163] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:16.670 [2024-11-19 03:04:27.223245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275493 ] 00:23:16.928 [2024-11-19 03:04:27.290581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.928 [2024-11-19 03:04:27.333981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.928 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.928 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.928 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:17.187 [2024-11-19 03:04:27.710387] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:17.187 [2024-11-19 03:04:27.710446] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:17.187 request: 00:23:17.187 { 00:23:17.187 "name": "key0", 00:23:17.187 "path": "", 00:23:17.187 "method": "keyring_file_add_key", 00:23:17.187 "req_id": 1 00:23:17.187 } 00:23:17.187 Got JSON-RPC error response 00:23:17.187 response: 00:23:17.187 { 00:23:17.187 "code": -1, 00:23:17.187 "message": "Operation not permitted" 00:23:17.187 } 00:23:17.187 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.445 [2024-11-19 03:04:27.979263] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.445 [2024-11-19 03:04:27.979349] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:17.445 request: 00:23:17.445 { 00:23:17.445 "name": "TLSTEST", 00:23:17.445 "trtype": "tcp", 00:23:17.445 "traddr": "10.0.0.2", 00:23:17.445 "adrfam": "ipv4", 00:23:17.445 "trsvcid": "4420", 00:23:17.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.445 "prchk_reftag": false, 00:23:17.445 "prchk_guard": false, 00:23:17.445 "hdgst": false, 00:23:17.445 "ddgst": false, 00:23:17.445 "psk": "key0", 00:23:17.445 "allow_unrecognized_csi": false, 00:23:17.445 "method": "bdev_nvme_attach_controller", 00:23:17.445 "req_id": 1 00:23:17.445 } 00:23:17.445 Got JSON-RPC error response 00:23:17.445 response: 00:23:17.445 { 00:23:17.445 "code": -126, 00:23:17.446 "message": "Required key not available" 00:23:17.446 } 00:23:17.446 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275493 00:23:17.446 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275493 ']' 00:23:17.446 03:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275493 00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275493 
00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275493' 00:23:17.446 killing process with pid 275493 00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275493 00:23:17.446 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.446 00:23:17.446 Latency(us) 00:23:17.446 [2024-11-19T02:04:28.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.446 [2024-11-19T02:04:28.061Z] =================================================================================================================== 00:23:17.446 [2024-11-19T02:04:28.061Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.446 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275493 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 271824 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 271824 ']' 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 271824 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271824 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271824' 00:23:17.704 killing process with pid 271824 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 271824 00:23:17.704 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 271824 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.PF0f2EVTX1 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.PF0f2EVTX1 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=275651 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 275651 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275651 ']' 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.963 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.221 [2024-11-19 03:04:28.582228] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:18.221 [2024-11-19 03:04:28.582310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.221 [2024-11-19 03:04:28.655814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.221 [2024-11-19 03:04:28.702331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.221 [2024-11-19 03:04:28.702386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
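[editor's note] target/tls.sh@160 above builds the long-format retained PSK: format_interchange_psk passes the configured key through the small python helper in nvmf/common.sh and produces the interchange string NVMeTLSkey-1:02:...: shown as key_long, which is then written to /tmp/tmp.PF0f2EVTX1 and chmod'ed to 0600. Below is a minimal sketch of that formatting step, assuming the encoding is base64 of the key bytes followed by their little-endian CRC-32 (consistent with the length of the string in the trace); it is an illustration, not the exact helper from nvmf/common.sh:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" 02 <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
digest = sys.argv[2]                       # "02" selects the hash variant, as in the trace
crc = struct.pack("<I", zlib.crc32(key))   # assumption: 4-byte CRC-32 appended little-endian
print("NVMeTLSkey-1:" + digest + ":" + base64.b64encode(key + crc).decode() + ":")
EOF
# If the assumptions hold, this reproduces the key_long value printed above.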
00:23:18.221 [2024-11-19 03:04:28.702399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.221 [2024-11-19 03:04:28.702410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.221 [2024-11-19 03:04:28.702420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.221 [2024-11-19 03:04:28.703009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.221 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.221 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.221 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.221 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.221 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.479 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.479 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.PF0f2EVTX1 00:23:18.479 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PF0f2EVTX1 00:23:18.479 03:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:18.744 [2024-11-19 03:04:29.144014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.744 03:04:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:19.002 03:04:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:19.259 [2024-11-19 03:04:29.785797] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:19.259 [2024-11-19 03:04:29.786073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.259 03:04:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:19.517 malloc0 00:23:19.517 03:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:19.775 03:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:20.340 03:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PF0f2EVTX1 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PF0f2EVTX1 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275943 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275943 /var/tmp/bdevperf.sock 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275943 ']' 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.598 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.598 [2024-11-19 03:04:31.067783] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
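[editor's note] target/tls.sh@164 through @168 is the positive path: the nvmf target is provisioned for TLS by setup_nvmf_tgt (traced above), and the bdevperf instance whose startup continues below attaches with the matching key and runs verify I/O for 10 seconds. Collected in one place, the target-side provisioning uses the same rpc.py calls seen in the trace, issued against the target's default RPC socket (rpc.py path shortened; key path is the file generated earlier):

# Target-side TLS provisioning as traced above (sketch).
KEY=/tmp/tmp.PF0f2EVTX1

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-capable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# register the PSK on the target and allow host1 to use it
scripts/rpc.py keyring_file_add_key key0 "$KEY"
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0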
00:23:20.598 [2024-11-19 03:04:31.067867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275943 ] 00:23:20.598 [2024-11-19 03:04:31.134618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.598 [2024-11-19 03:04:31.180091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.857 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.857 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.857 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:21.115 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.373 [2024-11-19 03:04:31.835162] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.373 TLSTESTn1 00:23:21.373 03:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:21.631 Running I/O for 10 seconds... 00:23:23.499 3200.00 IOPS, 12.50 MiB/s [2024-11-19T02:04:35.048Z] 3217.00 IOPS, 12.57 MiB/s [2024-11-19T02:04:36.421Z] 3105.00 IOPS, 12.13 MiB/s [2024-11-19T02:04:37.352Z] 3175.50 IOPS, 12.40 MiB/s [2024-11-19T02:04:38.285Z] 3226.80 IOPS, 12.60 MiB/s [2024-11-19T02:04:39.218Z] 3242.83 IOPS, 12.67 MiB/s [2024-11-19T02:04:40.148Z] 3261.43 IOPS, 12.74 MiB/s [2024-11-19T02:04:41.081Z] 3185.12 IOPS, 12.44 MiB/s [2024-11-19T02:04:42.455Z] 3191.78 IOPS, 12.47 MiB/s [2024-11-19T02:04:42.455Z] 3193.20 IOPS, 12.47 MiB/s 00:23:31.840 Latency(us) 00:23:31.840 [2024-11-19T02:04:42.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.840 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:31.840 Verification LBA range: start 0x0 length 0x2000 00:23:31.840 TLSTESTn1 : 10.02 3199.15 12.50 0.00 0.00 39948.48 5849.69 46409.20 00:23:31.840 [2024-11-19T02:04:42.455Z] =================================================================================================================== 00:23:31.840 [2024-11-19T02:04:42.455Z] Total : 3199.15 12.50 0.00 0.00 39948.48 5849.69 46409.20 00:23:31.840 { 00:23:31.840 "results": [ 00:23:31.840 { 00:23:31.840 "job": "TLSTESTn1", 00:23:31.840 "core_mask": "0x4", 00:23:31.840 "workload": "verify", 00:23:31.840 "status": "finished", 00:23:31.840 "verify_range": { 00:23:31.840 "start": 0, 00:23:31.840 "length": 8192 00:23:31.840 }, 00:23:31.840 "queue_depth": 128, 00:23:31.840 "io_size": 4096, 00:23:31.840 "runtime": 10.021114, 00:23:31.840 "iops": 3199.1453245617204, 00:23:31.840 "mibps": 12.49666142406922, 00:23:31.840 "io_failed": 0, 00:23:31.840 "io_timeout": 0, 00:23:31.840 "avg_latency_us": 39948.47667021337, 00:23:31.840 "min_latency_us": 5849.694814814815, 00:23:31.840 "max_latency_us": 46409.19703703704 00:23:31.840 } 00:23:31.840 ], 00:23:31.840 
"core_count": 1 00:23:31.840 } 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 275943 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275943 ']' 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275943 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275943 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275943' 00:23:31.840 killing process with pid 275943 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275943 00:23:31.840 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.840 00:23:31.840 Latency(us) 00:23:31.840 [2024-11-19T02:04:42.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.840 [2024-11-19T02:04:42.455Z] =================================================================================================================== 00:23:31.840 [2024-11-19T02:04:42.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275943 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.PF0f2EVTX1 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PF0f2EVTX1 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PF0f2EVTX1 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PF0f2EVTX1 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:31.840 
03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PF0f2EVTX1 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=277258 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 277258 /var/tmp/bdevperf.sock 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277258 ']' 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.840 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.840 [2024-11-19 03:04:42.381120] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:31.840 [2024-11-19 03:04:42.381201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277258 ] 00:23:31.840 [2024-11-19 03:04:42.448251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.099 [2024-11-19 03:04:42.492395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.099 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.099 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.099 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:32.357 [2024-11-19 03:04:42.863560] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PF0f2EVTX1': 0100666 00:23:32.357 [2024-11-19 03:04:42.863600] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:32.357 request: 00:23:32.357 { 00:23:32.357 "name": "key0", 00:23:32.357 "path": "/tmp/tmp.PF0f2EVTX1", 00:23:32.357 "method": "keyring_file_add_key", 00:23:32.357 "req_id": 1 00:23:32.357 } 00:23:32.357 Got JSON-RPC error response 00:23:32.357 response: 00:23:32.357 { 00:23:32.357 "code": -1, 00:23:32.357 "message": "Operation not permitted" 00:23:32.357 } 00:23:32.357 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.615 [2024-11-19 03:04:43.152451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.615 [2024-11-19 03:04:43.152511] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:32.615 request: 00:23:32.615 { 00:23:32.615 "name": "TLSTEST", 00:23:32.615 "trtype": "tcp", 00:23:32.615 "traddr": "10.0.0.2", 00:23:32.615 "adrfam": "ipv4", 00:23:32.615 "trsvcid": "4420", 00:23:32.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.615 "prchk_reftag": false, 00:23:32.615 "prchk_guard": false, 00:23:32.615 "hdgst": false, 00:23:32.615 "ddgst": false, 00:23:32.615 "psk": "key0", 00:23:32.615 "allow_unrecognized_csi": false, 00:23:32.615 "method": "bdev_nvme_attach_controller", 00:23:32.615 "req_id": 1 00:23:32.615 } 00:23:32.615 Got JSON-RPC error response 00:23:32.615 response: 00:23:32.615 { 00:23:32.615 "code": -126, 00:23:32.615 "message": "Required key not available" 00:23:32.615 } 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 277258 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277258 ']' 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277258 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277258 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277258' 00:23:32.615 killing process with pid 277258 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277258 00:23:32.615 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.615 00:23:32.615 Latency(us) 00:23:32.615 [2024-11-19T02:04:43.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.615 [2024-11-19T02:04:43.230Z] =================================================================================================================== 00:23:32.615 [2024-11-19T02:04:43.230Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.615 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277258 00:23:32.872 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:32.872 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:32.872 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:32.872 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 275651 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275651 ']' 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275651 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275651 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275651' 00:23:32.873 killing process with pid 275651 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275651 00:23:32.873 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275651 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277406 
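[editor's note] target/tls.sh@171 and @172 above cover the key-file permission check: after chmod 0666, keyring_file_add_key rejects the file ("Invalid permissions for key file '/tmp/tmp.PF0f2EVTX1': 0100666", code -1), and because key0 was never added, the following bdev_nvme_attach_controller fails with -126 (Required key not available). The same file was accepted earlier in the trace when its mode was 0600. In sketch form, against the bdevperf RPC socket used above:

# Permission behaviour observed in this part of the trace (sketch).
chmod 0666 /tmp/tmp.PF0f2EVTX1
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1
# -> rejected: "Invalid permissions for key file '/tmp/tmp.PF0f2EVTX1': 0100666" (code -1)

chmod 0600 /tmp/tmp.PF0f2EVTX1
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1
# -> accepted, as in the 0600 case earlier; a subsequent bdev_nvme_attach_controller --psk key0
#    can then load the key instead of failing with -126 "Required key not available"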
00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277406 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277406 ']' 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.131 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.131 [2024-11-19 03:04:43.639717] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:33.131 [2024-11-19 03:04:43.639815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.131 [2024-11-19 03:04:43.713333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.389 [2024-11-19 03:04:43.761906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.389 [2024-11-19 03:04:43.761957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.389 [2024-11-19 03:04:43.761971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.389 [2024-11-19 03:04:43.761982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.389 [2024-11-19 03:04:43.761992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
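nvmfappstart above launches a fresh target (pid 277406) inside the cvl_0_0_ns_spdk namespace on core mask 0x2 and then blocks in waitforlisten until its RPC socket answers. A rough equivalent of that start-and-wait step, assuming bash and using rpc_get_methods only as a cheap liveness probe in place of the autotest waitforlisten helper:

  # Launch the target in the test namespace; -m 0x2 pins it to core 1 and
  # -e 0xFFFF enables all tracepoint groups, matching the startup notices above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Poll the default UNIX-domain RPC socket until the target responds.
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done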
00:23:33.389 [2024-11-19 03:04:43.762595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.389 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.389 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.389 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.389 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.389 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.PF0f2EVTX1 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PF0f2EVTX1 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.PF0f2EVTX1 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PF0f2EVTX1 00:23:33.390 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:33.647 [2024-11-19 03:04:44.171856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.647 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:33.905 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.163 [2024-11-19 03:04:44.701291] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.163 [2024-11-19 03:04:44.701533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.163 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:34.421 malloc0 00:23:34.421 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:34.679 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:34.937 [2024-11-19 
03:04:45.513419] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PF0f2EVTX1': 0100666 00:23:34.937 [2024-11-19 03:04:45.513459] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:34.937 request: 00:23:34.937 { 00:23:34.937 "name": "key0", 00:23:34.937 "path": "/tmp/tmp.PF0f2EVTX1", 00:23:34.937 "method": "keyring_file_add_key", 00:23:34.937 "req_id": 1 00:23:34.937 } 00:23:34.937 Got JSON-RPC error response 00:23:34.937 response: 00:23:34.937 { 00:23:34.937 "code": -1, 00:23:34.937 "message": "Operation not permitted" 00:23:34.937 } 00:23:34.937 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.194 [2024-11-19 03:04:45.778153] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:35.194 [2024-11-19 03:04:45.778211] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:35.194 request: 00:23:35.194 { 00:23:35.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.194 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.194 "psk": "key0", 00:23:35.194 "method": "nvmf_subsystem_add_host", 00:23:35.194 "req_id": 1 00:23:35.194 } 00:23:35.194 Got JSON-RPC error response 00:23:35.194 response: 00:23:35.194 { 00:23:35.194 "code": -32603, 00:23:35.194 "message": "Internal error" 00:23:35.194 } 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 277406 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277406 ']' 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277406 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.194 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277406 00:23:35.454 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.454 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.454 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277406' 00:23:35.454 killing process with pid 277406 00:23:35.454 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277406 00:23:35.454 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277406 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.PF0f2EVTX1 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277709 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277709 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277709 ']' 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.454 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.455 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.455 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.713 [2024-11-19 03:04:46.105537] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:35.713 [2024-11-19 03:04:46.105632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.713 [2024-11-19 03:04:46.177369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.713 [2024-11-19 03:04:46.216877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.713 [2024-11-19 03:04:46.216939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.713 [2024-11-19 03:04:46.216967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.713 [2024-11-19 03:04:46.216979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.713 [2024-11-19 03:04:46.216995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
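With the key file now at mode 0600 (the chmod at target/tls.sh@182 above), setup_nvmf_tgt is expected to succeed against the new target (pid 277709). The earlier run of the same sequence at target/tls.sh@178 (the NOT setup_nvmf_tgt block) failed at the keyring step because the key file was still mode 0666, which is why nvmf_subsystem_add_host then reported "Key 'key0' does not exist". Condensed from the RPCs traced in the following lines, the target-side TLS setup is, in order (a sketch using the same rpc.py defaults as the log):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # '-k' marks the listener as TLS-capable ("TLS support is considered experimental").
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1    # requires mode 0600
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0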
00:23:35.713 [2024-11-19 03:04:46.217529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.713 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.713 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.713 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.976 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.976 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.976 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.976 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.PF0f2EVTX1 00:23:35.976 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PF0f2EVTX1 00:23:35.976 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:36.233 [2024-11-19 03:04:46.601716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.233 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.491 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.748 [2024-11-19 03:04:47.147185] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.748 [2024-11-19 03:04:47.147429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.748 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.006 malloc0 00:23:37.006 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.264 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:37.522 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=277999 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 277999 /var/tmp/bdevperf.sock 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 277999 ']' 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.781 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.781 [2024-11-19 03:04:48.286888] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:37.781 [2024-11-19 03:04:48.286984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277999 ] 00:23:37.781 [2024-11-19 03:04:48.353490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.039 [2024-11-19 03:04:48.401601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.039 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.039 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.039 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:38.298 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.556 [2024-11-19 03:04:49.050279] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.556 TLSTESTn1 00:23:38.556 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:39.120 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:39.120 "subsystems": [ 00:23:39.120 { 00:23:39.120 "subsystem": "keyring", 00:23:39.120 "config": [ 00:23:39.120 { 00:23:39.120 "method": "keyring_file_add_key", 00:23:39.120 "params": { 00:23:39.120 "name": "key0", 00:23:39.120 "path": "/tmp/tmp.PF0f2EVTX1" 00:23:39.120 } 00:23:39.120 } 00:23:39.120 ] 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "subsystem": "iobuf", 00:23:39.120 "config": [ 00:23:39.120 { 00:23:39.120 "method": "iobuf_set_options", 00:23:39.120 "params": { 00:23:39.120 "small_pool_count": 8192, 00:23:39.120 "large_pool_count": 1024, 00:23:39.120 "small_bufsize": 8192, 00:23:39.120 "large_bufsize": 135168, 00:23:39.120 "enable_numa": false 00:23:39.120 } 00:23:39.120 } 00:23:39.120 ] 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "subsystem": "sock", 00:23:39.120 "config": [ 00:23:39.120 { 00:23:39.120 "method": "sock_set_default_impl", 00:23:39.120 "params": { 00:23:39.120 "impl_name": "posix" 
00:23:39.120 } 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "method": "sock_impl_set_options", 00:23:39.120 "params": { 00:23:39.120 "impl_name": "ssl", 00:23:39.120 "recv_buf_size": 4096, 00:23:39.120 "send_buf_size": 4096, 00:23:39.120 "enable_recv_pipe": true, 00:23:39.120 "enable_quickack": false, 00:23:39.120 "enable_placement_id": 0, 00:23:39.120 "enable_zerocopy_send_server": true, 00:23:39.120 "enable_zerocopy_send_client": false, 00:23:39.120 "zerocopy_threshold": 0, 00:23:39.120 "tls_version": 0, 00:23:39.120 "enable_ktls": false 00:23:39.120 } 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "method": "sock_impl_set_options", 00:23:39.120 "params": { 00:23:39.120 "impl_name": "posix", 00:23:39.120 "recv_buf_size": 2097152, 00:23:39.120 "send_buf_size": 2097152, 00:23:39.120 "enable_recv_pipe": true, 00:23:39.120 "enable_quickack": false, 00:23:39.120 "enable_placement_id": 0, 00:23:39.120 "enable_zerocopy_send_server": true, 00:23:39.120 "enable_zerocopy_send_client": false, 00:23:39.120 "zerocopy_threshold": 0, 00:23:39.120 "tls_version": 0, 00:23:39.120 "enable_ktls": false 00:23:39.120 } 00:23:39.120 } 00:23:39.120 ] 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "subsystem": "vmd", 00:23:39.120 "config": [] 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "subsystem": "accel", 00:23:39.120 "config": [ 00:23:39.120 { 00:23:39.120 "method": "accel_set_options", 00:23:39.120 "params": { 00:23:39.120 "small_cache_size": 128, 00:23:39.120 "large_cache_size": 16, 00:23:39.120 "task_count": 2048, 00:23:39.120 "sequence_count": 2048, 00:23:39.120 "buf_count": 2048 00:23:39.120 } 00:23:39.120 } 00:23:39.120 ] 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "subsystem": "bdev", 00:23:39.120 "config": [ 00:23:39.120 { 00:23:39.120 "method": "bdev_set_options", 00:23:39.120 "params": { 00:23:39.120 "bdev_io_pool_size": 65535, 00:23:39.120 "bdev_io_cache_size": 256, 00:23:39.120 "bdev_auto_examine": true, 00:23:39.120 "iobuf_small_cache_size": 128, 00:23:39.120 "iobuf_large_cache_size": 16 00:23:39.120 } 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "method": "bdev_raid_set_options", 00:23:39.120 "params": { 00:23:39.120 "process_window_size_kb": 1024, 00:23:39.120 "process_max_bandwidth_mb_sec": 0 00:23:39.120 } 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "method": "bdev_iscsi_set_options", 00:23:39.120 "params": { 00:23:39.120 "timeout_sec": 30 00:23:39.120 } 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "method": "bdev_nvme_set_options", 00:23:39.120 "params": { 00:23:39.120 "action_on_timeout": "none", 00:23:39.120 "timeout_us": 0, 00:23:39.120 "timeout_admin_us": 0, 00:23:39.120 "keep_alive_timeout_ms": 10000, 00:23:39.120 "arbitration_burst": 0, 00:23:39.120 "low_priority_weight": 0, 00:23:39.120 "medium_priority_weight": 0, 00:23:39.120 "high_priority_weight": 0, 00:23:39.120 "nvme_adminq_poll_period_us": 10000, 00:23:39.120 "nvme_ioq_poll_period_us": 0, 00:23:39.120 "io_queue_requests": 0, 00:23:39.120 "delay_cmd_submit": true, 00:23:39.120 "transport_retry_count": 4, 00:23:39.120 "bdev_retry_count": 3, 00:23:39.120 "transport_ack_timeout": 0, 00:23:39.120 "ctrlr_loss_timeout_sec": 0, 00:23:39.120 "reconnect_delay_sec": 0, 00:23:39.120 "fast_io_fail_timeout_sec": 0, 00:23:39.120 "disable_auto_failback": false, 00:23:39.120 "generate_uuids": false, 00:23:39.120 "transport_tos": 0, 00:23:39.120 "nvme_error_stat": false, 00:23:39.120 "rdma_srq_size": 0, 00:23:39.120 "io_path_stat": false, 00:23:39.121 "allow_accel_sequence": false, 00:23:39.121 "rdma_max_cq_size": 0, 00:23:39.121 
"rdma_cm_event_timeout_ms": 0, 00:23:39.121 "dhchap_digests": [ 00:23:39.121 "sha256", 00:23:39.121 "sha384", 00:23:39.121 "sha512" 00:23:39.121 ], 00:23:39.121 "dhchap_dhgroups": [ 00:23:39.121 "null", 00:23:39.121 "ffdhe2048", 00:23:39.121 "ffdhe3072", 00:23:39.121 "ffdhe4096", 00:23:39.121 "ffdhe6144", 00:23:39.121 "ffdhe8192" 00:23:39.121 ] 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "bdev_nvme_set_hotplug", 00:23:39.121 "params": { 00:23:39.121 "period_us": 100000, 00:23:39.121 "enable": false 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "bdev_malloc_create", 00:23:39.121 "params": { 00:23:39.121 "name": "malloc0", 00:23:39.121 "num_blocks": 8192, 00:23:39.121 "block_size": 4096, 00:23:39.121 "physical_block_size": 4096, 00:23:39.121 "uuid": "d16f7811-5a91-4aeb-9614-cb92561fe73e", 00:23:39.121 "optimal_io_boundary": 0, 00:23:39.121 "md_size": 0, 00:23:39.121 "dif_type": 0, 00:23:39.121 "dif_is_head_of_md": false, 00:23:39.121 "dif_pi_format": 0 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "bdev_wait_for_examine" 00:23:39.121 } 00:23:39.121 ] 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "subsystem": "nbd", 00:23:39.121 "config": [] 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "subsystem": "scheduler", 00:23:39.121 "config": [ 00:23:39.121 { 00:23:39.121 "method": "framework_set_scheduler", 00:23:39.121 "params": { 00:23:39.121 "name": "static" 00:23:39.121 } 00:23:39.121 } 00:23:39.121 ] 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "subsystem": "nvmf", 00:23:39.121 "config": [ 00:23:39.121 { 00:23:39.121 "method": "nvmf_set_config", 00:23:39.121 "params": { 00:23:39.121 "discovery_filter": "match_any", 00:23:39.121 "admin_cmd_passthru": { 00:23:39.121 "identify_ctrlr": false 00:23:39.121 }, 00:23:39.121 "dhchap_digests": [ 00:23:39.121 "sha256", 00:23:39.121 "sha384", 00:23:39.121 "sha512" 00:23:39.121 ], 00:23:39.121 "dhchap_dhgroups": [ 00:23:39.121 "null", 00:23:39.121 "ffdhe2048", 00:23:39.121 "ffdhe3072", 00:23:39.121 "ffdhe4096", 00:23:39.121 "ffdhe6144", 00:23:39.121 "ffdhe8192" 00:23:39.121 ] 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "nvmf_set_max_subsystems", 00:23:39.121 "params": { 00:23:39.121 "max_subsystems": 1024 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "nvmf_set_crdt", 00:23:39.121 "params": { 00:23:39.121 "crdt1": 0, 00:23:39.121 "crdt2": 0, 00:23:39.121 "crdt3": 0 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "nvmf_create_transport", 00:23:39.121 "params": { 00:23:39.121 "trtype": "TCP", 00:23:39.121 "max_queue_depth": 128, 00:23:39.121 "max_io_qpairs_per_ctrlr": 127, 00:23:39.121 "in_capsule_data_size": 4096, 00:23:39.121 "max_io_size": 131072, 00:23:39.121 "io_unit_size": 131072, 00:23:39.121 "max_aq_depth": 128, 00:23:39.121 "num_shared_buffers": 511, 00:23:39.121 "buf_cache_size": 4294967295, 00:23:39.121 "dif_insert_or_strip": false, 00:23:39.121 "zcopy": false, 00:23:39.121 "c2h_success": false, 00:23:39.121 "sock_priority": 0, 00:23:39.121 "abort_timeout_sec": 1, 00:23:39.121 "ack_timeout": 0, 00:23:39.121 "data_wr_pool_size": 0 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "nvmf_create_subsystem", 00:23:39.121 "params": { 00:23:39.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.121 "allow_any_host": false, 00:23:39.121 "serial_number": "SPDK00000000000001", 00:23:39.121 "model_number": "SPDK bdev Controller", 00:23:39.121 "max_namespaces": 10, 00:23:39.121 "min_cntlid": 1, 00:23:39.121 
"max_cntlid": 65519, 00:23:39.121 "ana_reporting": false 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "nvmf_subsystem_add_host", 00:23:39.121 "params": { 00:23:39.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.121 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.121 "psk": "key0" 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "nvmf_subsystem_add_ns", 00:23:39.121 "params": { 00:23:39.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.121 "namespace": { 00:23:39.121 "nsid": 1, 00:23:39.121 "bdev_name": "malloc0", 00:23:39.121 "nguid": "D16F78115A914AEB9614CB92561FE73E", 00:23:39.121 "uuid": "d16f7811-5a91-4aeb-9614-cb92561fe73e", 00:23:39.121 "no_auto_visible": false 00:23:39.121 } 00:23:39.121 } 00:23:39.121 }, 00:23:39.121 { 00:23:39.121 "method": "nvmf_subsystem_add_listener", 00:23:39.121 "params": { 00:23:39.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.121 "listen_address": { 00:23:39.121 "trtype": "TCP", 00:23:39.121 "adrfam": "IPv4", 00:23:39.121 "traddr": "10.0.0.2", 00:23:39.121 "trsvcid": "4420" 00:23:39.121 }, 00:23:39.121 "secure_channel": true 00:23:39.121 } 00:23:39.121 } 00:23:39.121 ] 00:23:39.121 } 00:23:39.121 ] 00:23:39.121 }' 00:23:39.121 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:39.380 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:39.380 "subsystems": [ 00:23:39.380 { 00:23:39.381 "subsystem": "keyring", 00:23:39.381 "config": [ 00:23:39.381 { 00:23:39.381 "method": "keyring_file_add_key", 00:23:39.381 "params": { 00:23:39.381 "name": "key0", 00:23:39.381 "path": "/tmp/tmp.PF0f2EVTX1" 00:23:39.381 } 00:23:39.381 } 00:23:39.381 ] 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "subsystem": "iobuf", 00:23:39.381 "config": [ 00:23:39.381 { 00:23:39.381 "method": "iobuf_set_options", 00:23:39.381 "params": { 00:23:39.381 "small_pool_count": 8192, 00:23:39.381 "large_pool_count": 1024, 00:23:39.381 "small_bufsize": 8192, 00:23:39.381 "large_bufsize": 135168, 00:23:39.381 "enable_numa": false 00:23:39.381 } 00:23:39.381 } 00:23:39.381 ] 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "subsystem": "sock", 00:23:39.381 "config": [ 00:23:39.381 { 00:23:39.381 "method": "sock_set_default_impl", 00:23:39.381 "params": { 00:23:39.381 "impl_name": "posix" 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": "sock_impl_set_options", 00:23:39.381 "params": { 00:23:39.381 "impl_name": "ssl", 00:23:39.381 "recv_buf_size": 4096, 00:23:39.381 "send_buf_size": 4096, 00:23:39.381 "enable_recv_pipe": true, 00:23:39.381 "enable_quickack": false, 00:23:39.381 "enable_placement_id": 0, 00:23:39.381 "enable_zerocopy_send_server": true, 00:23:39.381 "enable_zerocopy_send_client": false, 00:23:39.381 "zerocopy_threshold": 0, 00:23:39.381 "tls_version": 0, 00:23:39.381 "enable_ktls": false 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": "sock_impl_set_options", 00:23:39.381 "params": { 00:23:39.381 "impl_name": "posix", 00:23:39.381 "recv_buf_size": 2097152, 00:23:39.381 "send_buf_size": 2097152, 00:23:39.381 "enable_recv_pipe": true, 00:23:39.381 "enable_quickack": false, 00:23:39.381 "enable_placement_id": 0, 00:23:39.381 "enable_zerocopy_send_server": true, 00:23:39.381 "enable_zerocopy_send_client": false, 00:23:39.381 "zerocopy_threshold": 0, 00:23:39.381 "tls_version": 0, 00:23:39.381 "enable_ktls": false 00:23:39.381 } 00:23:39.381 
} 00:23:39.381 ] 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "subsystem": "vmd", 00:23:39.381 "config": [] 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "subsystem": "accel", 00:23:39.381 "config": [ 00:23:39.381 { 00:23:39.381 "method": "accel_set_options", 00:23:39.381 "params": { 00:23:39.381 "small_cache_size": 128, 00:23:39.381 "large_cache_size": 16, 00:23:39.381 "task_count": 2048, 00:23:39.381 "sequence_count": 2048, 00:23:39.381 "buf_count": 2048 00:23:39.381 } 00:23:39.381 } 00:23:39.381 ] 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "subsystem": "bdev", 00:23:39.381 "config": [ 00:23:39.381 { 00:23:39.381 "method": "bdev_set_options", 00:23:39.381 "params": { 00:23:39.381 "bdev_io_pool_size": 65535, 00:23:39.381 "bdev_io_cache_size": 256, 00:23:39.381 "bdev_auto_examine": true, 00:23:39.381 "iobuf_small_cache_size": 128, 00:23:39.381 "iobuf_large_cache_size": 16 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": "bdev_raid_set_options", 00:23:39.381 "params": { 00:23:39.381 "process_window_size_kb": 1024, 00:23:39.381 "process_max_bandwidth_mb_sec": 0 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": "bdev_iscsi_set_options", 00:23:39.381 "params": { 00:23:39.381 "timeout_sec": 30 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": "bdev_nvme_set_options", 00:23:39.381 "params": { 00:23:39.381 "action_on_timeout": "none", 00:23:39.381 "timeout_us": 0, 00:23:39.381 "timeout_admin_us": 0, 00:23:39.381 "keep_alive_timeout_ms": 10000, 00:23:39.381 "arbitration_burst": 0, 00:23:39.381 "low_priority_weight": 0, 00:23:39.381 "medium_priority_weight": 0, 00:23:39.381 "high_priority_weight": 0, 00:23:39.381 "nvme_adminq_poll_period_us": 10000, 00:23:39.381 "nvme_ioq_poll_period_us": 0, 00:23:39.381 "io_queue_requests": 512, 00:23:39.381 "delay_cmd_submit": true, 00:23:39.381 "transport_retry_count": 4, 00:23:39.381 "bdev_retry_count": 3, 00:23:39.381 "transport_ack_timeout": 0, 00:23:39.381 "ctrlr_loss_timeout_sec": 0, 00:23:39.381 "reconnect_delay_sec": 0, 00:23:39.381 "fast_io_fail_timeout_sec": 0, 00:23:39.381 "disable_auto_failback": false, 00:23:39.381 "generate_uuids": false, 00:23:39.381 "transport_tos": 0, 00:23:39.381 "nvme_error_stat": false, 00:23:39.381 "rdma_srq_size": 0, 00:23:39.381 "io_path_stat": false, 00:23:39.381 "allow_accel_sequence": false, 00:23:39.381 "rdma_max_cq_size": 0, 00:23:39.381 "rdma_cm_event_timeout_ms": 0, 00:23:39.381 "dhchap_digests": [ 00:23:39.381 "sha256", 00:23:39.381 "sha384", 00:23:39.381 "sha512" 00:23:39.381 ], 00:23:39.381 "dhchap_dhgroups": [ 00:23:39.381 "null", 00:23:39.381 "ffdhe2048", 00:23:39.381 "ffdhe3072", 00:23:39.381 "ffdhe4096", 00:23:39.381 "ffdhe6144", 00:23:39.381 "ffdhe8192" 00:23:39.381 ] 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": "bdev_nvme_attach_controller", 00:23:39.381 "params": { 00:23:39.381 "name": "TLSTEST", 00:23:39.381 "trtype": "TCP", 00:23:39.381 "adrfam": "IPv4", 00:23:39.381 "traddr": "10.0.0.2", 00:23:39.381 "trsvcid": "4420", 00:23:39.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.381 "prchk_reftag": false, 00:23:39.381 "prchk_guard": false, 00:23:39.381 "ctrlr_loss_timeout_sec": 0, 00:23:39.381 "reconnect_delay_sec": 0, 00:23:39.381 "fast_io_fail_timeout_sec": 0, 00:23:39.381 "psk": "key0", 00:23:39.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.381 "hdgst": false, 00:23:39.381 "ddgst": false, 00:23:39.381 "multipath": "multipath" 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": 
"bdev_nvme_set_hotplug", 00:23:39.381 "params": { 00:23:39.381 "period_us": 100000, 00:23:39.381 "enable": false 00:23:39.381 } 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "method": "bdev_wait_for_examine" 00:23:39.381 } 00:23:39.381 ] 00:23:39.381 }, 00:23:39.381 { 00:23:39.381 "subsystem": "nbd", 00:23:39.382 "config": [] 00:23:39.382 } 00:23:39.382 ] 00:23:39.382 }' 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 277999 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277999 ']' 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277999 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277999 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277999' 00:23:39.382 killing process with pid 277999 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277999 00:23:39.382 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.382 00:23:39.382 Latency(us) 00:23:39.382 [2024-11-19T02:04:49.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.382 [2024-11-19T02:04:49.997Z] =================================================================================================================== 00:23:39.382 [2024-11-19T02:04:49.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.382 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277999 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 277709 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277709 ']' 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277709 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277709 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277709' 00:23:39.640 killing process with pid 277709 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277709 00:23:39.640 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277709 00:23:39.898 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:39.898 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.898 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:39.898 "subsystems": [ 00:23:39.898 { 00:23:39.898 "subsystem": "keyring", 00:23:39.898 "config": [ 00:23:39.898 { 00:23:39.898 "method": "keyring_file_add_key", 00:23:39.898 "params": { 00:23:39.898 "name": "key0", 00:23:39.898 "path": "/tmp/tmp.PF0f2EVTX1" 00:23:39.898 } 00:23:39.898 } 00:23:39.898 ] 00:23:39.898 }, 00:23:39.898 { 00:23:39.898 "subsystem": "iobuf", 00:23:39.898 "config": [ 00:23:39.898 { 00:23:39.898 "method": "iobuf_set_options", 00:23:39.898 "params": { 00:23:39.898 "small_pool_count": 8192, 00:23:39.898 "large_pool_count": 1024, 00:23:39.898 "small_bufsize": 8192, 00:23:39.898 "large_bufsize": 135168, 00:23:39.898 "enable_numa": false 00:23:39.898 } 00:23:39.898 } 00:23:39.898 ] 00:23:39.898 }, 00:23:39.898 { 00:23:39.898 "subsystem": "sock", 00:23:39.898 "config": [ 00:23:39.898 { 00:23:39.898 "method": "sock_set_default_impl", 00:23:39.898 "params": { 00:23:39.898 "impl_name": "posix" 00:23:39.898 } 00:23:39.898 }, 00:23:39.898 { 00:23:39.898 "method": "sock_impl_set_options", 00:23:39.898 "params": { 00:23:39.898 "impl_name": "ssl", 00:23:39.898 "recv_buf_size": 4096, 00:23:39.898 "send_buf_size": 4096, 00:23:39.898 "enable_recv_pipe": true, 00:23:39.898 "enable_quickack": false, 00:23:39.898 "enable_placement_id": 0, 00:23:39.898 "enable_zerocopy_send_server": true, 00:23:39.898 "enable_zerocopy_send_client": false, 00:23:39.898 "zerocopy_threshold": 0, 00:23:39.898 "tls_version": 0, 00:23:39.898 "enable_ktls": false 00:23:39.898 } 00:23:39.898 }, 00:23:39.898 { 00:23:39.898 "method": "sock_impl_set_options", 00:23:39.898 "params": { 00:23:39.898 "impl_name": "posix", 00:23:39.898 "recv_buf_size": 2097152, 00:23:39.898 "send_buf_size": 2097152, 00:23:39.898 "enable_recv_pipe": true, 00:23:39.898 "enable_quickack": false, 00:23:39.898 "enable_placement_id": 0, 00:23:39.898 "enable_zerocopy_send_server": true, 00:23:39.898 "enable_zerocopy_send_client": false, 00:23:39.899 "zerocopy_threshold": 0, 00:23:39.899 "tls_version": 0, 00:23:39.899 "enable_ktls": false 00:23:39.899 } 00:23:39.899 } 00:23:39.899 ] 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "subsystem": "vmd", 00:23:39.899 "config": [] 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "subsystem": "accel", 00:23:39.899 "config": [ 00:23:39.899 { 00:23:39.899 "method": "accel_set_options", 00:23:39.899 "params": { 00:23:39.899 "small_cache_size": 128, 00:23:39.899 "large_cache_size": 16, 00:23:39.899 "task_count": 2048, 00:23:39.899 "sequence_count": 2048, 00:23:39.899 "buf_count": 2048 00:23:39.899 } 00:23:39.899 } 00:23:39.899 ] 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "subsystem": "bdev", 00:23:39.899 "config": [ 00:23:39.899 { 00:23:39.899 "method": "bdev_set_options", 00:23:39.899 "params": { 00:23:39.899 "bdev_io_pool_size": 65535, 00:23:39.899 "bdev_io_cache_size": 256, 00:23:39.899 "bdev_auto_examine": true, 00:23:39.899 "iobuf_small_cache_size": 128, 00:23:39.899 "iobuf_large_cache_size": 16 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "bdev_raid_set_options", 00:23:39.899 "params": { 00:23:39.899 "process_window_size_kb": 1024, 00:23:39.899 "process_max_bandwidth_mb_sec": 0 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "bdev_iscsi_set_options", 00:23:39.899 "params": { 00:23:39.899 
"timeout_sec": 30 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "bdev_nvme_set_options", 00:23:39.899 "params": { 00:23:39.899 "action_on_timeout": "none", 00:23:39.899 "timeout_us": 0, 00:23:39.899 "timeout_admin_us": 0, 00:23:39.899 "keep_alive_timeout_ms": 10000, 00:23:39.899 "arbitration_burst": 0, 00:23:39.899 "low_priority_weight": 0, 00:23:39.899 "medium_priority_weight": 0, 00:23:39.899 "high_priority_weight": 0, 00:23:39.899 "nvme_adminq_poll_period_us": 10000, 00:23:39.899 "nvme_ioq_poll_period_us": 0, 00:23:39.899 "io_queue_requests": 0, 00:23:39.899 "delay_cmd_submit": true, 00:23:39.899 "transport_retry_count": 4, 00:23:39.899 "bdev_retry_count": 3, 00:23:39.899 "transport_ack_timeout": 0, 00:23:39.899 "ctrlr_loss_timeout_sec": 0, 00:23:39.899 "reconnect_delay_sec": 0, 00:23:39.899 "fast_io_fail_timeout_sec": 0, 00:23:39.899 "disable_auto_failback": false, 00:23:39.899 "generate_uuids": false, 00:23:39.899 "transport_tos": 0, 00:23:39.899 "nvme_error_stat": false, 00:23:39.899 "rdma_srq_size": 0, 00:23:39.899 "io_path_stat": false, 00:23:39.899 "allow_accel_sequence": false, 00:23:39.899 "rdma_max_cq_size": 0, 00:23:39.899 "rdma_cm_event_timeout_ms": 0, 00:23:39.899 "dhchap_digests": [ 00:23:39.899 "sha256", 00:23:39.899 "sha384", 00:23:39.899 "sha512" 00:23:39.899 ], 00:23:39.899 "dhchap_dhgroups": [ 00:23:39.899 "null", 00:23:39.899 "ffdhe2048", 00:23:39.899 "ffdhe3072", 00:23:39.899 "ffdhe4096", 00:23:39.899 "ffdhe6144", 00:23:39.899 "ffdhe8192" 00:23:39.899 ] 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "bdev_nvme_set_hotplug", 00:23:39.899 "params": { 00:23:39.899 "period_us": 100000, 00:23:39.899 "enable": false 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "bdev_malloc_create", 00:23:39.899 "params": { 00:23:39.899 "name": "malloc0", 00:23:39.899 "num_blocks": 8192, 00:23:39.899 "block_size": 4096, 00:23:39.899 "physical_block_size": 4096, 00:23:39.899 "uuid": "d16f7811-5a91-4aeb-9614-cb92561fe73e", 00:23:39.899 "optimal_io_boundary": 0, 00:23:39.899 "md_size": 0, 00:23:39.899 "dif_type": 0, 00:23:39.899 "dif_is_head_of_md": false, 00:23:39.899 "dif_pi_format": 0 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "bdev_wait_for_examine" 00:23:39.899 } 00:23:39.899 ] 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "subsystem": "nbd", 00:23:39.899 "config": [] 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "subsystem": "scheduler", 00:23:39.899 "config": [ 00:23:39.899 { 00:23:39.899 "method": "framework_set_scheduler", 00:23:39.899 "params": { 00:23:39.899 "name": "static" 00:23:39.899 } 00:23:39.899 } 00:23:39.899 ] 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "subsystem": "nvmf", 00:23:39.899 "config": [ 00:23:39.899 { 00:23:39.899 "method": "nvmf_set_config", 00:23:39.899 "params": { 00:23:39.899 "discovery_filter": "match_any", 00:23:39.899 "admin_cmd_passthru": { 00:23:39.899 "identify_ctrlr": false 00:23:39.899 }, 00:23:39.899 "dhchap_digests": [ 00:23:39.899 "sha256", 00:23:39.899 "sha384", 00:23:39.899 "sha512" 00:23:39.899 ], 00:23:39.899 "dhchap_dhgroups": [ 00:23:39.899 "null", 00:23:39.899 "ffdhe2048", 00:23:39.899 "ffdhe3072", 00:23:39.899 "ffdhe4096", 00:23:39.899 "ffdhe6144", 00:23:39.899 "ffdhe8192" 00:23:39.899 ] 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "nvmf_set_max_subsystems", 00:23:39.899 "params": { 00:23:39.899 "max_subsystems": 1024 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "nvmf_set_crdt", 00:23:39.899 "params": { 
00:23:39.899 "crdt1": 0, 00:23:39.899 "crdt2": 0, 00:23:39.899 "crdt3": 0 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "nvmf_create_transport", 00:23:39.899 "params": { 00:23:39.899 "trtype": "TCP", 00:23:39.899 "max_queue_depth": 128, 00:23:39.899 "max_io_qpairs_per_ctrlr": 127, 00:23:39.899 "in_capsule_data_size": 4096, 00:23:39.899 "max_io_size": 131072, 00:23:39.899 "io_unit_size": 131072, 00:23:39.899 "max_aq_depth": 128, 00:23:39.899 "num_shared_buffers": 511, 00:23:39.899 "buf_cache_size": 4294967295, 00:23:39.899 "dif_insert_or_strip": false, 00:23:39.899 "zcopy": false, 00:23:39.899 "c2h_success": false, 00:23:39.899 "sock_priority": 0, 00:23:39.899 "abort_timeout_sec": 1, 00:23:39.899 "ack_timeout": 0, 00:23:39.899 "data_wr_pool_size": 0 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "nvmf_create_subsystem", 00:23:39.899 "params": { 00:23:39.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.899 "allow_any_host": false, 00:23:39.899 "serial_number": "SPDK00000000000001", 00:23:39.899 "model_number": "SPDK bdev Controller", 00:23:39.899 "max_namespaces": 10, 00:23:39.899 "min_cntlid": 1, 00:23:39.899 "max_cntlid": 65519, 00:23:39.899 "ana_reporting": false 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "nvmf_subsystem_add_host", 00:23:39.899 "params": { 00:23:39.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.899 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.899 "psk": "key0" 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "nvmf_subsystem_add_ns", 00:23:39.899 "params": { 00:23:39.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.899 "namespace": { 00:23:39.899 "nsid": 1, 00:23:39.899 "bdev_name": "malloc0", 00:23:39.899 "nguid": "D16F78115A914AEB9614CB92561FE73E", 00:23:39.899 "uuid": "d16f7811-5a91-4aeb-9614-cb92561fe73e", 00:23:39.899 "no_auto_visible": false 00:23:39.899 } 00:23:39.899 } 00:23:39.899 }, 00:23:39.899 { 00:23:39.899 "method": "nvmf_subsystem_add_listener", 00:23:39.899 "params": { 00:23:39.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.900 "listen_address": { 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.900 "trtype": "TCP", 00:23:39.900 "adrfam": "IPv4", 00:23:39.900 "traddr": "10.0.0.2", 00:23:39.900 "trsvcid": "4420" 00:23:39.900 }, 00:23:39.900 "secure_channel": true 00:23:39.900 } 00:23:39.900 } 00:23:39.900 ] 00:23:39.900 } 00:23:39.900 ] 00:23:39.900 }' 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278272 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278272 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278272 ']' 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:39.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.900 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.900 [2024-11-19 03:04:50.373950] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:39.900 [2024-11-19 03:04:50.374043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.900 [2024-11-19 03:04:50.447127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.900 [2024-11-19 03:04:50.491716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.900 [2024-11-19 03:04:50.491776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.900 [2024-11-19 03:04:50.491805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.900 [2024-11-19 03:04:50.491817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.900 [2024-11-19 03:04:50.491827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.900 [2024-11-19 03:04:50.492446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.159 [2024-11-19 03:04:50.732459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.159 [2024-11-19 03:04:50.764483] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.159 [2024-11-19 03:04:50.764750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=278422 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 278422 /var/tmp/bdevperf.sock 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278422 ']' 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:41.093 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.093 03:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:41.093 "subsystems": [ 00:23:41.093 { 00:23:41.093 "subsystem": "keyring", 00:23:41.093 "config": [ 00:23:41.093 { 00:23:41.093 "method": "keyring_file_add_key", 00:23:41.093 "params": { 00:23:41.093 "name": "key0", 00:23:41.093 "path": "/tmp/tmp.PF0f2EVTX1" 00:23:41.093 } 00:23:41.093 } 00:23:41.093 ] 00:23:41.093 }, 00:23:41.093 { 00:23:41.093 "subsystem": "iobuf", 00:23:41.093 "config": [ 00:23:41.093 { 00:23:41.093 "method": "iobuf_set_options", 00:23:41.093 "params": { 00:23:41.093 "small_pool_count": 8192, 00:23:41.093 "large_pool_count": 1024, 00:23:41.093 "small_bufsize": 8192, 00:23:41.093 "large_bufsize": 135168, 00:23:41.093 "enable_numa": false 00:23:41.093 } 00:23:41.093 } 00:23:41.093 ] 00:23:41.093 }, 00:23:41.093 { 00:23:41.093 "subsystem": "sock", 00:23:41.093 "config": [ 00:23:41.093 { 00:23:41.093 "method": "sock_set_default_impl", 00:23:41.093 "params": { 00:23:41.093 "impl_name": "posix" 00:23:41.093 } 00:23:41.093 }, 00:23:41.093 { 00:23:41.093 "method": "sock_impl_set_options", 00:23:41.093 "params": { 00:23:41.093 "impl_name": "ssl", 00:23:41.093 "recv_buf_size": 4096, 00:23:41.093 "send_buf_size": 4096, 00:23:41.093 "enable_recv_pipe": true, 00:23:41.093 "enable_quickack": false, 00:23:41.093 "enable_placement_id": 0, 00:23:41.093 "enable_zerocopy_send_server": true, 00:23:41.093 "enable_zerocopy_send_client": false, 00:23:41.093 "zerocopy_threshold": 0, 00:23:41.093 "tls_version": 0, 00:23:41.093 "enable_ktls": false 00:23:41.093 } 00:23:41.093 }, 00:23:41.093 { 00:23:41.093 "method": "sock_impl_set_options", 00:23:41.093 "params": { 00:23:41.093 "impl_name": "posix", 00:23:41.094 "recv_buf_size": 2097152, 00:23:41.094 "send_buf_size": 2097152, 00:23:41.094 "enable_recv_pipe": true, 00:23:41.094 "enable_quickack": false, 00:23:41.094 "enable_placement_id": 0, 00:23:41.094 "enable_zerocopy_send_server": true, 00:23:41.094 "enable_zerocopy_send_client": false, 00:23:41.094 "zerocopy_threshold": 0, 00:23:41.094 "tls_version": 0, 00:23:41.094 "enable_ktls": false 00:23:41.094 } 00:23:41.094 } 00:23:41.094 ] 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "subsystem": "vmd", 00:23:41.094 "config": [] 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "subsystem": "accel", 00:23:41.094 "config": [ 00:23:41.094 { 00:23:41.094 "method": "accel_set_options", 00:23:41.094 "params": { 00:23:41.094 "small_cache_size": 128, 00:23:41.094 "large_cache_size": 16, 00:23:41.094 "task_count": 2048, 00:23:41.094 "sequence_count": 2048, 00:23:41.094 "buf_count": 2048 00:23:41.094 } 00:23:41.094 } 00:23:41.094 ] 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "subsystem": "bdev", 00:23:41.094 "config": [ 00:23:41.094 { 00:23:41.094 "method": "bdev_set_options", 00:23:41.094 "params": { 00:23:41.094 "bdev_io_pool_size": 65535, 00:23:41.094 "bdev_io_cache_size": 256, 00:23:41.094 "bdev_auto_examine": true, 00:23:41.094 "iobuf_small_cache_size": 128, 00:23:41.094 "iobuf_large_cache_size": 16 00:23:41.094 } 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "method": "bdev_raid_set_options", 00:23:41.094 "params": { 00:23:41.094 "process_window_size_kb": 1024, 00:23:41.094 "process_max_bandwidth_mb_sec": 0 00:23:41.094 } 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "method": "bdev_iscsi_set_options", 00:23:41.094 "params": { 00:23:41.094 "timeout_sec": 30 00:23:41.094 } 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "method": "bdev_nvme_set_options", 00:23:41.094 "params": { 00:23:41.094 "action_on_timeout": "none", 00:23:41.094 
"timeout_us": 0, 00:23:41.094 "timeout_admin_us": 0, 00:23:41.094 "keep_alive_timeout_ms": 10000, 00:23:41.094 "arbitration_burst": 0, 00:23:41.094 "low_priority_weight": 0, 00:23:41.094 "medium_priority_weight": 0, 00:23:41.094 "high_priority_weight": 0, 00:23:41.094 "nvme_adminq_poll_period_us": 10000, 00:23:41.094 "nvme_ioq_poll_period_us": 0, 00:23:41.094 "io_queue_requests": 512, 00:23:41.094 "delay_cmd_submit": true, 00:23:41.094 "transport_retry_count": 4, 00:23:41.094 "bdev_retry_count": 3, 00:23:41.094 "transport_ack_timeout": 0, 00:23:41.094 "ctrlr_loss_timeout_sec": 0, 00:23:41.094 "reconnect_delay_sec": 0, 00:23:41.094 "fast_io_fail_timeout_sec": 0, 00:23:41.094 "disable_auto_failback": false, 00:23:41.094 "generate_uuids": false, 00:23:41.094 "transport_tos": 0, 00:23:41.094 "nvme_error_stat": false, 00:23:41.094 "rdma_srq_size": 0, 00:23:41.094 "io_path_stat": false, 00:23:41.094 "allow_accel_sequence": false, 00:23:41.094 "rdma_max_cq_size": 0, 00:23:41.094 "rdma_cm_event_timeout_ms": 0, 00:23:41.094 "dhchap_digests": [ 00:23:41.094 "sha256", 00:23:41.094 "sha384", 00:23:41.094 "sha512" 00:23:41.094 ], 00:23:41.094 "dhchap_dhgroups": [ 00:23:41.094 "null", 00:23:41.094 "ffdhe2048", 00:23:41.094 "ffdhe3072", 00:23:41.094 "ffdhe4096", 00:23:41.094 "ffdhe6144", 00:23:41.094 "ffdhe8192" 00:23:41.094 ] 00:23:41.094 } 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "method": "bdev_nvme_attach_controller", 00:23:41.094 "params": { 00:23:41.094 "name": "TLSTEST", 00:23:41.094 "trtype": "TCP", 00:23:41.094 "adrfam": "IPv4", 00:23:41.094 "traddr": "10.0.0.2", 00:23:41.094 "trsvcid": "4420", 00:23:41.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.094 "prchk_reftag": false, 00:23:41.094 "prchk_guard": false, 00:23:41.094 "ctrlr_loss_timeout_sec": 0, 00:23:41.094 "reconnect_delay_sec": 0, 00:23:41.094 "fast_io_fail_timeout_sec": 0, 00:23:41.094 "psk": "key0", 00:23:41.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.094 "hdgst": false, 00:23:41.094 "ddgst": false, 00:23:41.094 "multipath": "multipath" 00:23:41.094 } 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "method": "bdev_nvme_set_hotplug", 00:23:41.094 "params": { 00:23:41.094 "period_us": 100000, 00:23:41.094 "enable": false 00:23:41.094 } 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "method": "bdev_wait_for_examine" 00:23:41.094 } 00:23:41.094 ] 00:23:41.094 }, 00:23:41.094 { 00:23:41.094 "subsystem": "nbd", 00:23:41.094 "config": [] 00:23:41.094 } 00:23:41.094 ] 00:23:41.094 }' 00:23:41.094 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.094 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.094 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.094 [2024-11-19 03:04:51.429084] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:41.094 [2024-11-19 03:04:51.429161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278422 ] 00:23:41.094 [2024-11-19 03:04:51.494279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.094 [2024-11-19 03:04:51.540029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.352 [2024-11-19 03:04:51.715996] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.352 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.352 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.352 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.352 Running I/O for 10 seconds... 00:23:43.654 2929.00 IOPS, 11.44 MiB/s [2024-11-19T02:04:55.200Z] 3037.00 IOPS, 11.86 MiB/s [2024-11-19T02:04:56.133Z] 3193.33 IOPS, 12.47 MiB/s [2024-11-19T02:04:57.066Z] 3263.50 IOPS, 12.75 MiB/s [2024-11-19T02:04:58.000Z] 3294.20 IOPS, 12.87 MiB/s [2024-11-19T02:04:59.373Z] 3313.83 IOPS, 12.94 MiB/s [2024-11-19T02:05:00.305Z] 3278.71 IOPS, 12.81 MiB/s [2024-11-19T02:05:01.239Z] 3295.50 IOPS, 12.87 MiB/s [2024-11-19T02:05:02.172Z] 3313.44 IOPS, 12.94 MiB/s [2024-11-19T02:05:02.172Z] 3316.20 IOPS, 12.95 MiB/s 00:23:51.557 Latency(us) 00:23:51.557 [2024-11-19T02:05:02.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.557 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.557 Verification LBA range: start 0x0 length 0x2000 00:23:51.557 TLSTESTn1 : 10.02 3322.89 12.98 0.00 0.00 38461.82 6092.42 72235.24 00:23:51.557 [2024-11-19T02:05:02.172Z] =================================================================================================================== 00:23:51.557 [2024-11-19T02:05:02.172Z] Total : 3322.89 12.98 0.00 0.00 38461.82 6092.42 72235.24 00:23:51.557 { 00:23:51.557 "results": [ 00:23:51.557 { 00:23:51.557 "job": "TLSTESTn1", 00:23:51.557 "core_mask": "0x4", 00:23:51.557 "workload": "verify", 00:23:51.557 "status": "finished", 00:23:51.557 "verify_range": { 00:23:51.557 "start": 0, 00:23:51.557 "length": 8192 00:23:51.557 }, 00:23:51.557 "queue_depth": 128, 00:23:51.557 "io_size": 4096, 00:23:51.557 "runtime": 10.018077, 00:23:51.557 "iops": 3322.8932059516014, 00:23:51.557 "mibps": 12.980051585748443, 00:23:51.557 "io_failed": 0, 00:23:51.557 "io_timeout": 0, 00:23:51.557 "avg_latency_us": 38461.81911115116, 00:23:51.557 "min_latency_us": 6092.420740740741, 00:23:51.557 "max_latency_us": 72235.23555555556 00:23:51.557 } 00:23:51.557 ], 00:23:51.557 "core_count": 1 00:23:51.557 } 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 278422 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278422 ']' 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278422 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278422 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278422' 00:23:51.557 killing process with pid 278422 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278422 00:23:51.557 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.557 00:23:51.557 Latency(us) 00:23:51.557 [2024-11-19T02:05:02.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.557 [2024-11-19T02:05:02.172Z] =================================================================================================================== 00:23:51.557 [2024-11-19T02:05:02.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.557 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278422 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 278272 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278272 ']' 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278272 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278272 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278272' 00:23:51.814 killing process with pid 278272 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278272 00:23:51.814 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278272 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=279741 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 279741 00:23:52.072 03:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279741 ']' 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.072 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.072 [2024-11-19 03:05:02.565760] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:52.072 [2024-11-19 03:05:02.565853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.072 [2024-11-19 03:05:02.633697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.072 [2024-11-19 03:05:02.678467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.073 [2024-11-19 03:05:02.678538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.073 [2024-11-19 03:05:02.678574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.073 [2024-11-19 03:05:02.678586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.073 [2024-11-19 03:05:02.678596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
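[editor's note] The setup_nvmf_tgt sequence recorded in the next stretch of the log (target/tls.sh@221 through @59) configures the TLS-capable target over the default RPC socket. A minimal sketch of that RPC sequence, reusing the key path, NQNs and address exactly as they appear in this log (illustrative only, not the test script itself):

```bash
#!/usr/bin/env bash
# Sketch of the target-side TLS setup shown below; flags are copied from the log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.PF0f2EVTX1    # PSK interchange file created earlier in tls.sh

$RPC nvmf_create_transport -t tcp -o                       # TCP transport (flags as in the log)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -s SPDK00000000000001 -m 10                           # subsystem, up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k                         # -k: TLS listener ("TLS support is considered experimental")
$RPC bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB malloc bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY"                      # register the PSK file in the keyring
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk key0                  # bind the host NQN to that PSK
```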
00:23:52.073 [2024-11-19 03:05:02.679198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.PF0f2EVTX1 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PF0f2EVTX1 00:23:52.331 03:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:52.589 [2024-11-19 03:05:03.119783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.589 03:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:52.847 03:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:53.104 [2024-11-19 03:05:03.657190] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.104 [2024-11-19 03:05:03.657418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.104 03:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:53.671 malloc0 00:23:53.671 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:53.929 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:54.187 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=280032 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 280032 /var/tmp/bdevperf.sock 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 280032 ']' 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.445 03:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 [2024-11-19 03:05:04.890971] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:54.445 [2024-11-19 03:05:04.891056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280032 ] 00:23:54.445 [2024-11-19 03:05:04.956982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.445 [2024-11-19 03:05:05.003614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.703 03:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.703 03:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.703 03:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:54.961 03:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:55.219 [2024-11-19 03:05:05.665485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.219 nvme0n1 00:23:55.219 03:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.476 Running I/O for 1 seconds... 
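[editor's note] On the initiator side the same PSK file is registered with bdevperf's own RPC server and then referenced by name when attaching the controller; the short verify run reported next is driven through bdevperf.py. A rough sketch of that sequence, using the socket path and NQNs shown in this log (illustrative only):

```bash
# Sketch of the initiator-side sequence issued around this point in the log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bdevperf.sock

$RPC -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1   # same PSK as the target
$RPC -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
     -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
$PERF -s "$SOCK" perform_tests                                  # kicks off the 1-second verify run below
```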
00:23:56.410 3300.00 IOPS, 12.89 MiB/s 00:23:56.410 Latency(us) 00:23:56.410 [2024-11-19T02:05:07.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.410 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:56.410 Verification LBA range: start 0x0 length 0x2000 00:23:56.410 nvme0n1 : 1.02 3364.48 13.14 0.00 0.00 37719.37 6796.33 48545.19 00:23:56.410 [2024-11-19T02:05:07.025Z] =================================================================================================================== 00:23:56.410 [2024-11-19T02:05:07.025Z] Total : 3364.48 13.14 0.00 0.00 37719.37 6796.33 48545.19 00:23:56.410 { 00:23:56.410 "results": [ 00:23:56.410 { 00:23:56.410 "job": "nvme0n1", 00:23:56.410 "core_mask": "0x2", 00:23:56.410 "workload": "verify", 00:23:56.410 "status": "finished", 00:23:56.410 "verify_range": { 00:23:56.410 "start": 0, 00:23:56.410 "length": 8192 00:23:56.410 }, 00:23:56.410 "queue_depth": 128, 00:23:56.410 "io_size": 4096, 00:23:56.410 "runtime": 1.019178, 00:23:56.410 "iops": 3364.476077780329, 00:23:56.410 "mibps": 13.14248467882941, 00:23:56.410 "io_failed": 0, 00:23:56.410 "io_timeout": 0, 00:23:56.410 "avg_latency_us": 37719.368215763156, 00:23:56.410 "min_latency_us": 6796.325925925926, 00:23:56.410 "max_latency_us": 48545.18518518518 00:23:56.410 } 00:23:56.410 ], 00:23:56.410 "core_count": 1 00:23:56.410 } 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 280032 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280032 ']' 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280032 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280032 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280032' 00:23:56.410 killing process with pid 280032 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280032 00:23:56.410 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.410 00:23:56.410 Latency(us) 00:23:56.410 [2024-11-19T02:05:07.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.410 [2024-11-19T02:05:07.025Z] =================================================================================================================== 00:23:56.410 [2024-11-19T02:05:07.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.410 03:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280032 00:23:56.668 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 279741 00:23:56.668 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279741 ']' 00:23:56.668 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 279741 00:23:56.669 03:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.669 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.669 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279741 00:23:56.669 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.669 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.669 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279741' 00:23:56.669 killing process with pid 279741 00:23:56.669 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279741 00:23:56.669 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279741 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280314 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280314 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280314 ']' 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.927 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.927 [2024-11-19 03:05:07.433683] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:56.927 [2024-11-19 03:05:07.433781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.927 [2024-11-19 03:05:07.503667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.927 [2024-11-19 03:05:07.543811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.927 [2024-11-19 03:05:07.543874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:56.927 [2024-11-19 03:05:07.543888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.927 [2024-11-19 03:05:07.543900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.927 [2024-11-19 03:05:07.543909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.927 [2024-11-19 03:05:07.544480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.185 [2024-11-19 03:05:07.682210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.185 malloc0 00:23:57.185 [2024-11-19 03:05:07.713713] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.185 [2024-11-19 03:05:07.714002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=280338 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 280338 /var/tmp/bdevperf.sock 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280338 ']' 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.185 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.185 [2024-11-19 03:05:07.783974] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:23:57.185 [2024-11-19 03:05:07.784050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280338 ] 00:23:57.444 [2024-11-19 03:05:07.852192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.444 [2024-11-19 03:05:07.898713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.444 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.444 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.444 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1 00:23:57.701 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:57.959 [2024-11-19 03:05:08.541181] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.218 nvme0n1 00:23:58.218 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.218 Running I/O for 1 seconds... 00:23:59.408 2926.00 IOPS, 11.43 MiB/s 00:23:59.408 Latency(us) 00:23:59.408 [2024-11-19T02:05:10.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.408 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:59.408 Verification LBA range: start 0x0 length 0x2000 00:23:59.408 nvme0n1 : 1.02 2983.81 11.66 0.00 0.00 42453.08 7573.05 37865.24 00:23:59.408 [2024-11-19T02:05:10.023Z] =================================================================================================================== 00:23:59.408 [2024-11-19T02:05:10.023Z] Total : 2983.81 11.66 0.00 0.00 42453.08 7573.05 37865.24 00:23:59.408 { 00:23:59.408 "results": [ 00:23:59.408 { 00:23:59.408 "job": "nvme0n1", 00:23:59.408 "core_mask": "0x2", 00:23:59.408 "workload": "verify", 00:23:59.408 "status": "finished", 00:23:59.408 "verify_range": { 00:23:59.408 "start": 0, 00:23:59.408 "length": 8192 00:23:59.408 }, 00:23:59.408 "queue_depth": 128, 00:23:59.408 "io_size": 4096, 00:23:59.408 "runtime": 1.023524, 00:23:59.408 "iops": 2983.80887990902, 00:23:59.408 "mibps": 11.65550343714461, 00:23:59.408 "io_failed": 0, 00:23:59.408 "io_timeout": 0, 00:23:59.408 "avg_latency_us": 42453.08235659366, 00:23:59.408 "min_latency_us": 7573.0488888888885, 00:23:59.408 "max_latency_us": 37865.24444444444 00:23:59.408 } 00:23:59.408 ], 00:23:59.408 "core_count": 1 00:23:59.408 } 00:23:59.408 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:59.408 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.408 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.408 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.408 03:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:59.408 "subsystems": [ 00:23:59.408 { 00:23:59.408 "subsystem": "keyring", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.408 "method": "keyring_file_add_key", 00:23:59.408 "params": { 00:23:59.408 "name": "key0", 00:23:59.408 "path": "/tmp/tmp.PF0f2EVTX1" 00:23:59.408 } 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "iobuf", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.408 "method": "iobuf_set_options", 00:23:59.408 "params": { 00:23:59.408 "small_pool_count": 8192, 00:23:59.408 "large_pool_count": 1024, 00:23:59.408 "small_bufsize": 8192, 00:23:59.408 "large_bufsize": 135168, 00:23:59.408 "enable_numa": false 00:23:59.408 } 00:23:59.408 } 00:23:59.408 ] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "sock", 00:23:59.409 "config": [ 00:23:59.409 { 00:23:59.409 "method": "sock_set_default_impl", 00:23:59.409 "params": { 00:23:59.409 "impl_name": "posix" 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "sock_impl_set_options", 00:23:59.409 "params": { 00:23:59.409 "impl_name": "ssl", 00:23:59.409 "recv_buf_size": 4096, 00:23:59.409 "send_buf_size": 4096, 00:23:59.409 "enable_recv_pipe": true, 00:23:59.409 "enable_quickack": false, 00:23:59.409 "enable_placement_id": 0, 00:23:59.409 "enable_zerocopy_send_server": true, 00:23:59.409 "enable_zerocopy_send_client": false, 00:23:59.409 "zerocopy_threshold": 0, 00:23:59.409 "tls_version": 0, 00:23:59.409 "enable_ktls": false 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "sock_impl_set_options", 00:23:59.409 "params": { 00:23:59.409 "impl_name": "posix", 00:23:59.409 "recv_buf_size": 2097152, 00:23:59.409 "send_buf_size": 2097152, 00:23:59.409 "enable_recv_pipe": true, 00:23:59.409 "enable_quickack": false, 00:23:59.409 "enable_placement_id": 0, 00:23:59.409 "enable_zerocopy_send_server": true, 00:23:59.409 "enable_zerocopy_send_client": false, 00:23:59.409 "zerocopy_threshold": 0, 00:23:59.409 "tls_version": 0, 00:23:59.409 "enable_ktls": false 00:23:59.409 } 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "vmd", 00:23:59.409 "config": [] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "accel", 00:23:59.409 "config": [ 00:23:59.409 { 00:23:59.409 "method": "accel_set_options", 00:23:59.409 "params": { 00:23:59.409 "small_cache_size": 128, 00:23:59.409 "large_cache_size": 16, 00:23:59.409 "task_count": 2048, 00:23:59.409 "sequence_count": 2048, 00:23:59.409 "buf_count": 2048 00:23:59.409 } 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "bdev", 00:23:59.409 "config": [ 00:23:59.409 { 00:23:59.409 "method": "bdev_set_options", 00:23:59.409 "params": { 00:23:59.409 "bdev_io_pool_size": 65535, 00:23:59.409 "bdev_io_cache_size": 256, 00:23:59.409 "bdev_auto_examine": true, 00:23:59.409 "iobuf_small_cache_size": 128, 00:23:59.409 "iobuf_large_cache_size": 16 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_raid_set_options", 00:23:59.409 "params": { 00:23:59.409 "process_window_size_kb": 1024, 00:23:59.409 "process_max_bandwidth_mb_sec": 0 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_iscsi_set_options", 00:23:59.409 "params": { 00:23:59.409 "timeout_sec": 30 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_nvme_set_options", 00:23:59.409 "params": { 00:23:59.409 "action_on_timeout": "none", 00:23:59.409 
"timeout_us": 0, 00:23:59.409 "timeout_admin_us": 0, 00:23:59.409 "keep_alive_timeout_ms": 10000, 00:23:59.409 "arbitration_burst": 0, 00:23:59.409 "low_priority_weight": 0, 00:23:59.409 "medium_priority_weight": 0, 00:23:59.409 "high_priority_weight": 0, 00:23:59.409 "nvme_adminq_poll_period_us": 10000, 00:23:59.409 "nvme_ioq_poll_period_us": 0, 00:23:59.409 "io_queue_requests": 0, 00:23:59.409 "delay_cmd_submit": true, 00:23:59.409 "transport_retry_count": 4, 00:23:59.409 "bdev_retry_count": 3, 00:23:59.409 "transport_ack_timeout": 0, 00:23:59.409 "ctrlr_loss_timeout_sec": 0, 00:23:59.409 "reconnect_delay_sec": 0, 00:23:59.409 "fast_io_fail_timeout_sec": 0, 00:23:59.409 "disable_auto_failback": false, 00:23:59.409 "generate_uuids": false, 00:23:59.409 "transport_tos": 0, 00:23:59.409 "nvme_error_stat": false, 00:23:59.409 "rdma_srq_size": 0, 00:23:59.409 "io_path_stat": false, 00:23:59.409 "allow_accel_sequence": false, 00:23:59.409 "rdma_max_cq_size": 0, 00:23:59.409 "rdma_cm_event_timeout_ms": 0, 00:23:59.409 "dhchap_digests": [ 00:23:59.409 "sha256", 00:23:59.409 "sha384", 00:23:59.409 "sha512" 00:23:59.409 ], 00:23:59.409 "dhchap_dhgroups": [ 00:23:59.409 "null", 00:23:59.409 "ffdhe2048", 00:23:59.409 "ffdhe3072", 00:23:59.409 "ffdhe4096", 00:23:59.409 "ffdhe6144", 00:23:59.409 "ffdhe8192" 00:23:59.409 ] 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_nvme_set_hotplug", 00:23:59.409 "params": { 00:23:59.409 "period_us": 100000, 00:23:59.409 "enable": false 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_malloc_create", 00:23:59.409 "params": { 00:23:59.409 "name": "malloc0", 00:23:59.409 "num_blocks": 8192, 00:23:59.409 "block_size": 4096, 00:23:59.409 "physical_block_size": 4096, 00:23:59.409 "uuid": "7d5606dd-226b-4b69-9c23-6affe70360e8", 00:23:59.409 "optimal_io_boundary": 0, 00:23:59.409 "md_size": 0, 00:23:59.409 "dif_type": 0, 00:23:59.409 "dif_is_head_of_md": false, 00:23:59.409 "dif_pi_format": 0 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_wait_for_examine" 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "nbd", 00:23:59.409 "config": [] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "scheduler", 00:23:59.409 "config": [ 00:23:59.409 { 00:23:59.409 "method": "framework_set_scheduler", 00:23:59.409 "params": { 00:23:59.409 "name": "static" 00:23:59.409 } 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "nvmf", 00:23:59.409 "config": [ 00:23:59.409 { 00:23:59.409 "method": "nvmf_set_config", 00:23:59.409 "params": { 00:23:59.409 "discovery_filter": "match_any", 00:23:59.409 "admin_cmd_passthru": { 00:23:59.409 "identify_ctrlr": false 00:23:59.409 }, 00:23:59.409 "dhchap_digests": [ 00:23:59.409 "sha256", 00:23:59.409 "sha384", 00:23:59.409 "sha512" 00:23:59.409 ], 00:23:59.409 "dhchap_dhgroups": [ 00:23:59.409 "null", 00:23:59.409 "ffdhe2048", 00:23:59.409 "ffdhe3072", 00:23:59.409 "ffdhe4096", 00:23:59.409 "ffdhe6144", 00:23:59.409 "ffdhe8192" 00:23:59.409 ] 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "nvmf_set_max_subsystems", 00:23:59.409 "params": { 00:23:59.409 "max_subsystems": 1024 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "nvmf_set_crdt", 00:23:59.409 "params": { 00:23:59.409 "crdt1": 0, 00:23:59.409 "crdt2": 0, 00:23:59.409 "crdt3": 0 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "nvmf_create_transport", 00:23:59.409 "params": 
{ 00:23:59.409 "trtype": "TCP", 00:23:59.409 "max_queue_depth": 128, 00:23:59.409 "max_io_qpairs_per_ctrlr": 127, 00:23:59.409 "in_capsule_data_size": 4096, 00:23:59.409 "max_io_size": 131072, 00:23:59.409 "io_unit_size": 131072, 00:23:59.409 "max_aq_depth": 128, 00:23:59.409 "num_shared_buffers": 511, 00:23:59.409 "buf_cache_size": 4294967295, 00:23:59.409 "dif_insert_or_strip": false, 00:23:59.409 "zcopy": false, 00:23:59.409 "c2h_success": false, 00:23:59.409 "sock_priority": 0, 00:23:59.409 "abort_timeout_sec": 1, 00:23:59.409 "ack_timeout": 0, 00:23:59.409 "data_wr_pool_size": 0 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "nvmf_create_subsystem", 00:23:59.409 "params": { 00:23:59.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.409 "allow_any_host": false, 00:23:59.409 "serial_number": "00000000000000000000", 00:23:59.409 "model_number": "SPDK bdev Controller", 00:23:59.409 "max_namespaces": 32, 00:23:59.409 "min_cntlid": 1, 00:23:59.409 "max_cntlid": 65519, 00:23:59.409 "ana_reporting": false 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "nvmf_subsystem_add_host", 00:23:59.409 "params": { 00:23:59.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.409 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.409 "psk": "key0" 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "nvmf_subsystem_add_ns", 00:23:59.409 "params": { 00:23:59.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.409 "namespace": { 00:23:59.409 "nsid": 1, 00:23:59.409 "bdev_name": "malloc0", 00:23:59.409 "nguid": "7D5606DD226B4B699C236AFFE70360E8", 00:23:59.409 "uuid": "7d5606dd-226b-4b69-9c23-6affe70360e8", 00:23:59.409 "no_auto_visible": false 00:23:59.409 } 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "nvmf_subsystem_add_listener", 00:23:59.409 "params": { 00:23:59.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.409 "listen_address": { 00:23:59.409 "trtype": "TCP", 00:23:59.409 "adrfam": "IPv4", 00:23:59.409 "traddr": "10.0.0.2", 00:23:59.409 "trsvcid": "4420" 00:23:59.409 }, 00:23:59.410 "secure_channel": false, 00:23:59.410 "sock_impl": "ssl" 00:23:59.410 } 00:23:59.410 } 00:23:59.410 ] 00:23:59.410 } 00:23:59.410 ] 00:23:59.410 }' 00:23:59.410 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:59.668 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:59.668 "subsystems": [ 00:23:59.668 { 00:23:59.668 "subsystem": "keyring", 00:23:59.668 "config": [ 00:23:59.668 { 00:23:59.668 "method": "keyring_file_add_key", 00:23:59.668 "params": { 00:23:59.668 "name": "key0", 00:23:59.668 "path": "/tmp/tmp.PF0f2EVTX1" 00:23:59.668 } 00:23:59.668 } 00:23:59.668 ] 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "subsystem": "iobuf", 00:23:59.668 "config": [ 00:23:59.668 { 00:23:59.668 "method": "iobuf_set_options", 00:23:59.668 "params": { 00:23:59.668 "small_pool_count": 8192, 00:23:59.668 "large_pool_count": 1024, 00:23:59.668 "small_bufsize": 8192, 00:23:59.668 "large_bufsize": 135168, 00:23:59.668 "enable_numa": false 00:23:59.668 } 00:23:59.668 } 00:23:59.668 ] 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "subsystem": "sock", 00:23:59.668 "config": [ 00:23:59.668 { 00:23:59.668 "method": "sock_set_default_impl", 00:23:59.668 "params": { 00:23:59.668 "impl_name": "posix" 00:23:59.668 } 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "method": "sock_impl_set_options", 00:23:59.668 
"params": { 00:23:59.668 "impl_name": "ssl", 00:23:59.668 "recv_buf_size": 4096, 00:23:59.668 "send_buf_size": 4096, 00:23:59.668 "enable_recv_pipe": true, 00:23:59.668 "enable_quickack": false, 00:23:59.668 "enable_placement_id": 0, 00:23:59.668 "enable_zerocopy_send_server": true, 00:23:59.668 "enable_zerocopy_send_client": false, 00:23:59.668 "zerocopy_threshold": 0, 00:23:59.668 "tls_version": 0, 00:23:59.668 "enable_ktls": false 00:23:59.668 } 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "method": "sock_impl_set_options", 00:23:59.668 "params": { 00:23:59.668 "impl_name": "posix", 00:23:59.668 "recv_buf_size": 2097152, 00:23:59.668 "send_buf_size": 2097152, 00:23:59.668 "enable_recv_pipe": true, 00:23:59.668 "enable_quickack": false, 00:23:59.668 "enable_placement_id": 0, 00:23:59.668 "enable_zerocopy_send_server": true, 00:23:59.668 "enable_zerocopy_send_client": false, 00:23:59.668 "zerocopy_threshold": 0, 00:23:59.668 "tls_version": 0, 00:23:59.668 "enable_ktls": false 00:23:59.668 } 00:23:59.668 } 00:23:59.668 ] 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "subsystem": "vmd", 00:23:59.668 "config": [] 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "subsystem": "accel", 00:23:59.668 "config": [ 00:23:59.668 { 00:23:59.668 "method": "accel_set_options", 00:23:59.668 "params": { 00:23:59.668 "small_cache_size": 128, 00:23:59.668 "large_cache_size": 16, 00:23:59.668 "task_count": 2048, 00:23:59.668 "sequence_count": 2048, 00:23:59.668 "buf_count": 2048 00:23:59.668 } 00:23:59.668 } 00:23:59.668 ] 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "subsystem": "bdev", 00:23:59.668 "config": [ 00:23:59.668 { 00:23:59.668 "method": "bdev_set_options", 00:23:59.668 "params": { 00:23:59.668 "bdev_io_pool_size": 65535, 00:23:59.668 "bdev_io_cache_size": 256, 00:23:59.668 "bdev_auto_examine": true, 00:23:59.668 "iobuf_small_cache_size": 128, 00:23:59.668 "iobuf_large_cache_size": 16 00:23:59.668 } 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "method": "bdev_raid_set_options", 00:23:59.668 "params": { 00:23:59.668 "process_window_size_kb": 1024, 00:23:59.668 "process_max_bandwidth_mb_sec": 0 00:23:59.668 } 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "method": "bdev_iscsi_set_options", 00:23:59.668 "params": { 00:23:59.668 "timeout_sec": 30 00:23:59.668 } 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "method": "bdev_nvme_set_options", 00:23:59.668 "params": { 00:23:59.668 "action_on_timeout": "none", 00:23:59.668 "timeout_us": 0, 00:23:59.668 "timeout_admin_us": 0, 00:23:59.668 "keep_alive_timeout_ms": 10000, 00:23:59.668 "arbitration_burst": 0, 00:23:59.668 "low_priority_weight": 0, 00:23:59.668 "medium_priority_weight": 0, 00:23:59.668 "high_priority_weight": 0, 00:23:59.668 "nvme_adminq_poll_period_us": 10000, 00:23:59.668 "nvme_ioq_poll_period_us": 0, 00:23:59.668 "io_queue_requests": 512, 00:23:59.668 "delay_cmd_submit": true, 00:23:59.668 "transport_retry_count": 4, 00:23:59.668 "bdev_retry_count": 3, 00:23:59.668 "transport_ack_timeout": 0, 00:23:59.668 "ctrlr_loss_timeout_sec": 0, 00:23:59.668 "reconnect_delay_sec": 0, 00:23:59.669 "fast_io_fail_timeout_sec": 0, 00:23:59.669 "disable_auto_failback": false, 00:23:59.669 "generate_uuids": false, 00:23:59.669 "transport_tos": 0, 00:23:59.669 "nvme_error_stat": false, 00:23:59.669 "rdma_srq_size": 0, 00:23:59.669 "io_path_stat": false, 00:23:59.669 "allow_accel_sequence": false, 00:23:59.669 "rdma_max_cq_size": 0, 00:23:59.669 "rdma_cm_event_timeout_ms": 0, 00:23:59.669 "dhchap_digests": [ 00:23:59.669 "sha256", 00:23:59.669 "sha384", 00:23:59.669 
"sha512" 00:23:59.669 ], 00:23:59.669 "dhchap_dhgroups": [ 00:23:59.669 "null", 00:23:59.669 "ffdhe2048", 00:23:59.669 "ffdhe3072", 00:23:59.669 "ffdhe4096", 00:23:59.669 "ffdhe6144", 00:23:59.669 "ffdhe8192" 00:23:59.669 ] 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_nvme_attach_controller", 00:23:59.669 "params": { 00:23:59.669 "name": "nvme0", 00:23:59.669 "trtype": "TCP", 00:23:59.669 "adrfam": "IPv4", 00:23:59.669 "traddr": "10.0.0.2", 00:23:59.669 "trsvcid": "4420", 00:23:59.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.669 "prchk_reftag": false, 00:23:59.669 "prchk_guard": false, 00:23:59.669 "ctrlr_loss_timeout_sec": 0, 00:23:59.669 "reconnect_delay_sec": 0, 00:23:59.669 "fast_io_fail_timeout_sec": 0, 00:23:59.669 "psk": "key0", 00:23:59.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.669 "hdgst": false, 00:23:59.669 "ddgst": false, 00:23:59.669 "multipath": "multipath" 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_nvme_set_hotplug", 00:23:59.669 "params": { 00:23:59.669 "period_us": 100000, 00:23:59.669 "enable": false 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_enable_histogram", 00:23:59.669 "params": { 00:23:59.669 "name": "nvme0n1", 00:23:59.669 "enable": true 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_wait_for_examine" 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "nbd", 00:23:59.669 "config": [] 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }' 00:23:59.669 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 280338 00:23:59.669 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280338 ']' 00:23:59.669 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280338 00:23:59.669 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.669 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.669 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280338 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280338' 00:23:59.927 killing process with pid 280338 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280338 00:23:59.927 Received shutdown signal, test time was about 1.000000 seconds 00:23:59.927 00:23:59.927 Latency(us) 00:23:59.927 [2024-11-19T02:05:10.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.927 [2024-11-19T02:05:10.542Z] =================================================================================================================== 00:23:59.927 [2024-11-19T02:05:10.542Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280338 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 280314 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280314 ']' 
00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280314 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280314 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280314' 00:23:59.927 killing process with pid 280314 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280314 00:23:59.927 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280314 00:24:00.187 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:00.187 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.187 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:00.187 "subsystems": [ 00:24:00.187 { 00:24:00.187 "subsystem": "keyring", 00:24:00.187 "config": [ 00:24:00.187 { 00:24:00.187 "method": "keyring_file_add_key", 00:24:00.187 "params": { 00:24:00.187 "name": "key0", 00:24:00.187 "path": "/tmp/tmp.PF0f2EVTX1" 00:24:00.187 } 00:24:00.187 } 00:24:00.187 ] 00:24:00.187 }, 00:24:00.187 { 00:24:00.187 "subsystem": "iobuf", 00:24:00.187 "config": [ 00:24:00.187 { 00:24:00.187 "method": "iobuf_set_options", 00:24:00.187 "params": { 00:24:00.187 "small_pool_count": 8192, 00:24:00.187 "large_pool_count": 1024, 00:24:00.187 "small_bufsize": 8192, 00:24:00.187 "large_bufsize": 135168, 00:24:00.187 "enable_numa": false 00:24:00.187 } 00:24:00.187 } 00:24:00.187 ] 00:24:00.187 }, 00:24:00.187 { 00:24:00.187 "subsystem": "sock", 00:24:00.187 "config": [ 00:24:00.187 { 00:24:00.187 "method": "sock_set_default_impl", 00:24:00.187 "params": { 00:24:00.187 "impl_name": "posix" 00:24:00.187 } 00:24:00.187 }, 00:24:00.187 { 00:24:00.187 "method": "sock_impl_set_options", 00:24:00.187 "params": { 00:24:00.187 "impl_name": "ssl", 00:24:00.187 "recv_buf_size": 4096, 00:24:00.187 "send_buf_size": 4096, 00:24:00.187 "enable_recv_pipe": true, 00:24:00.187 "enable_quickack": false, 00:24:00.187 "enable_placement_id": 0, 00:24:00.187 "enable_zerocopy_send_server": true, 00:24:00.187 "enable_zerocopy_send_client": false, 00:24:00.187 "zerocopy_threshold": 0, 00:24:00.187 "tls_version": 0, 00:24:00.187 "enable_ktls": false 00:24:00.187 } 00:24:00.187 }, 00:24:00.187 { 00:24:00.187 "method": "sock_impl_set_options", 00:24:00.187 "params": { 00:24:00.187 "impl_name": "posix", 00:24:00.187 "recv_buf_size": 2097152, 00:24:00.187 "send_buf_size": 2097152, 00:24:00.187 "enable_recv_pipe": true, 00:24:00.187 "enable_quickack": false, 00:24:00.187 "enable_placement_id": 0, 00:24:00.187 "enable_zerocopy_send_server": true, 00:24:00.187 "enable_zerocopy_send_client": false, 00:24:00.187 "zerocopy_threshold": 0, 00:24:00.187 "tls_version": 0, 00:24:00.187 "enable_ktls": false 00:24:00.187 } 00:24:00.187 } 00:24:00.187 ] 00:24:00.187 }, 00:24:00.187 { 00:24:00.187 "subsystem": "vmd", 
00:24:00.187 "config": [] 00:24:00.187 }, 00:24:00.187 { 00:24:00.187 "subsystem": "accel", 00:24:00.187 "config": [ 00:24:00.187 { 00:24:00.187 "method": "accel_set_options", 00:24:00.187 "params": { 00:24:00.187 "small_cache_size": 128, 00:24:00.187 "large_cache_size": 16, 00:24:00.188 "task_count": 2048, 00:24:00.188 "sequence_count": 2048, 00:24:00.188 "buf_count": 2048 00:24:00.188 } 00:24:00.188 } 00:24:00.188 ] 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "subsystem": "bdev", 00:24:00.188 "config": [ 00:24:00.188 { 00:24:00.188 "method": "bdev_set_options", 00:24:00.188 "params": { 00:24:00.188 "bdev_io_pool_size": 65535, 00:24:00.188 "bdev_io_cache_size": 256, 00:24:00.188 "bdev_auto_examine": true, 00:24:00.188 "iobuf_small_cache_size": 128, 00:24:00.188 "iobuf_large_cache_size": 16 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "bdev_raid_set_options", 00:24:00.188 "params": { 00:24:00.188 "process_window_size_kb": 1024, 00:24:00.188 "process_max_bandwidth_mb_sec": 0 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "bdev_iscsi_set_options", 00:24:00.188 "params": { 00:24:00.188 "timeout_sec": 30 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "bdev_nvme_set_options", 00:24:00.188 "params": { 00:24:00.188 "action_on_timeout": "none", 00:24:00.188 "timeout_us": 0, 00:24:00.188 "timeout_admin_us": 0, 00:24:00.188 "keep_alive_timeout_ms": 10000, 00:24:00.188 "arbitration_burst": 0, 00:24:00.188 "low_priority_weight": 0, 00:24:00.188 "medium_priority_weight": 0, 00:24:00.188 "high_priority_weight": 0, 00:24:00.188 "nvme_adminq_poll_period_us": 10000, 00:24:00.188 "nvme_ioq_poll_period_us": 0, 00:24:00.188 "io_queue_requests": 0, 00:24:00.188 "delay_cmd_submit": true, 00:24:00.188 "transport_retry_count": 4, 00:24:00.188 "bdev_retry_count": 3, 00:24:00.188 "transport_ack_timeout": 0, 00:24:00.188 "ctrlr_loss_timeout_sec": 0, 00:24:00.188 "reconnect_delay_sec": 0, 00:24:00.188 "fast_io_fail_timeout_sec": 0, 00:24:00.188 "disable_auto_failback": false, 00:24:00.188 "generate_uuids": false, 00:24:00.188 "transport_tos": 0, 00:24:00.188 "nvme_error_stat": false, 00:24:00.188 "rdma_srq_size": 0, 00:24:00.188 "io_path_stat": false, 00:24:00.188 "allow_accel_sequence": false, 00:24:00.188 "rdma_max_cq_size": 0, 00:24:00.188 "rdma_cm_event_timeout_ms": 0, 00:24:00.188 "dhchap_digests": [ 00:24:00.188 "sha256", 00:24:00.188 "sha384", 00:24:00.188 "sha512" 00:24:00.188 ], 00:24:00.188 "dhchap_dhgroups": [ 00:24:00.188 "null", 00:24:00.188 "ffdhe2048", 00:24:00.188 "ffdhe3072", 00:24:00.188 "ffdhe4096", 00:24:00.188 "ffdhe6144", 00:24:00.188 "ffdhe8192" 00:24:00.188 ] 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "bdev_nvme_set_hotplug", 00:24:00.188 "params": { 00:24:00.188 "period_us": 100000, 00:24:00.188 "enable": false 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "bdev_malloc_create", 00:24:00.188 "params": { 00:24:00.188 "name": "malloc0", 00:24:00.188 "num_blocks": 8192, 00:24:00.188 "block_size": 4096, 00:24:00.188 "physical_block_size": 4096, 00:24:00.188 "uuid": "7d5606dd-226b-4b69-9c23-6affe70360e8", 00:24:00.188 "optimal_io_boundary": 0, 00:24:00.188 "md_size": 0, 00:24:00.188 "dif_type": 0, 00:24:00.188 "dif_is_head_of_md": false, 00:24:00.188 "dif_pi_format": 0 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "bdev_wait_for_examine" 00:24:00.188 } 00:24:00.188 ] 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "subsystem": "nbd", 00:24:00.188 "config": [] 
00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "subsystem": "scheduler", 00:24:00.188 "config": [ 00:24:00.188 { 00:24:00.188 "method": "framework_set_scheduler", 00:24:00.188 "params": { 00:24:00.188 "name": "static" 00:24:00.188 } 00:24:00.188 } 00:24:00.188 ] 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "subsystem": "nvmf", 00:24:00.188 "config": [ 00:24:00.188 { 00:24:00.188 "method": "nvmf_set_config", 00:24:00.188 "params": { 00:24:00.188 "discovery_filter": "match_any", 00:24:00.188 "admin_cmd_passthru": { 00:24:00.188 "identify_ctrlr": false 00:24:00.188 }, 00:24:00.188 "dhchap_digests": [ 00:24:00.188 "sha256", 00:24:00.188 "sha384", 00:24:00.188 "sha512" 00:24:00.188 ], 00:24:00.188 "dhchap_dhgroups": [ 00:24:00.188 "null", 00:24:00.188 "ffdhe2048", 00:24:00.188 "ffdhe3072", 00:24:00.188 "ffdhe4096", 00:24:00.188 "ffdhe6144", 00:24:00.188 "ffdhe8192" 00:24:00.188 ] 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "nvmf_set_max_subsystems", 00:24:00.188 "params": { 00:24:00.188 "max_subsystems": 1024 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "nvmf_set_crdt", 00:24:00.188 "params": { 00:24:00.188 "crdt1": 0, 00:24:00.188 "crdt2": 0, 00:24:00.188 "crdt3": 0 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "nvmf_create_transport", 00:24:00.188 "params": { 00:24:00.188 "trtype": "TCP", 00:24:00.188 "max_queue_depth": 128, 00:24:00.188 "max_io_qpairs_per_ctrlr": 127, 00:24:00.188 "in_capsule_data_size": 4096, 00:24:00.188 "max_io_size": 131072, 00:24:00.188 "io_unit_size": 131072, 00:24:00.188 "max_aq_depth": 128, 00:24:00.188 "num_shared_buffers": 511, 00:24:00.188 "buf_cache_size": 4294967295, 00:24:00.188 "dif_insert_or_strip": false, 00:24:00.188 "zcopy": false, 00:24:00.188 "c2h_success": false, 00:24:00.188 "sock_priority": 0, 00:24:00.188 "abort_timeout_sec": 1, 00:24:00.188 "ack_timeout": 0, 00:24:00.188 "data_wr_pool_size": 0 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "nvmf_create_subsystem", 00:24:00.188 "params": { 00:24:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.188 "allow_any_host": false, 00:24:00.188 "serial_number": "00000000000000000000", 00:24:00.188 "model_number": "SPDK bdev Controller", 00:24:00.188 "max_namespaces": 32, 00:24:00.188 "min_cntlid": 1, 00:24:00.188 "max_cntlid": 65519, 00:24:00.188 "ana_reporting": false 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "nvmf_subsystem_add_host", 00:24:00.188 "params": { 00:24:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.188 "host": "nqn.2016-06.io.spdk:host1", 00:24:00.188 "psk": "key0" 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "nvmf_subsystem_add_ns", 00:24:00.188 "params": { 00:24:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.188 "namespace": { 00:24:00.188 "nsid": 1, 00:24:00.188 "bdev_name": "malloc0", 00:24:00.188 "nguid": "7D5606DD226B4B699C236AFFE70360E8", 00:24:00.188 "uuid": "7d5606dd-226b-4b69-9c23-6affe70360e8", 00:24:00.188 "no_auto_visible": false 00:24:00.188 } 00:24:00.188 } 00:24:00.188 }, 00:24:00.188 { 00:24:00.188 "method": "nvmf_subsystem_add_listener", 00:24:00.188 "params": { 00:24:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.188 "listen_address": { 00:24:00.188 "trtype": "TCP", 00:24:00.188 "adrfam": "IPv4", 00:24:00.188 "traddr": "10.0.0.2", 00:24:00.188 "trsvcid": "4420" 00:24:00.188 }, 00:24:00.188 "secure_channel": false, 00:24:00.188 "sock_impl": "ssl" 00:24:00.188 } 00:24:00.188 } 00:24:00.188 ] 00:24:00.188 } 00:24:00.188 
] 00:24:00.188 }' 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280744 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280744 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280744 ']' 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.188 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.188 [2024-11-19 03:05:10.800706] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:00.188 [2024-11-19 03:05:10.800798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.448 [2024-11-19 03:05:10.870545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.448 [2024-11-19 03:05:10.909782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.448 [2024-11-19 03:05:10.909844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.448 [2024-11-19 03:05:10.909874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.448 [2024-11-19 03:05:10.909885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.448 [2024-11-19 03:05:10.909895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
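
The JSON echoed above is the target-side configuration handed to nvmf_tgt on /dev/fd/62: a static scheduler, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 backed by malloc0 as namespace 1, host nqn.2016-06.io.spdk:host1 bound to PSK "key0", and a listener on 10.0.0.2:4420 over the ssl sock implementation. A minimal sketch of the same setup driven interactively through scripts/rpc.py follows; the method names are taken from that config, but the option spellings, the malloc sizing, and the reuse of this run's temporary key path are assumptions rather than commands captured in this log.

    # Sketch only: the target-side TLS setup above, replayed as interactive rpc.py calls.
    # Method names come from the JSON config; option spellings and malloc sizing are assumptions.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC keyring_file_add_key key0 /tmp/tmp.PF0f2EVTX1             # PSK interchange file used in this run
    $RPC nvmf_create_transport -t TCP
    $RPC bdev_malloc_create -b malloc0 32 4096                     # illustrative 32 MiB / 4 KiB backing bdev
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The listener's "sock_impl": "ssl" and "secure_channel" fields are carried in the JSON config
    # in this run rather than on the command line.
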
00:24:00.448 [2024-11-19 03:05:10.910522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.706 [2024-11-19 03:05:11.147362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.706 [2024-11-19 03:05:11.179395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.706 [2024-11-19 03:05:11.179621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=280895 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 280895 /var/tmp/bdevperf.sock 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280895 ']' 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
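
The initiator side of the run is the bdevperf example application, started idle with -z against /var/tmp/bdevperf.sock and fed its own JSON configuration (echoed below) on /dev/fd/63; once the controller has attached over TLS, the workload is triggered through the bdevperf.py helper, which produces the IOPS and latency summary further down. A compressed sketch of that two-step flow is given here; the binary and helper paths and the queue, IO-size, and workload options are copied from this run, while the named config file and the sleep-based wait are stand-ins for the script's fd plumbing and waitforlisten helper.

    # Sketch only: start bdevperf idle, then kick off the verify workload over its RPC socket.
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    HELPER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    $BDEVPERF -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bdevperf.json &
    sleep 1                                            # stand-in for waiting on the RPC socket to appear
    $HELPER -s /var/tmp/bdevperf.sock perform_tests    # runs the 1 s verify job and prints the summary
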
00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.273 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:01.273 "subsystems": [ 00:24:01.273 { 00:24:01.273 "subsystem": "keyring", 00:24:01.273 "config": [ 00:24:01.273 { 00:24:01.273 "method": "keyring_file_add_key", 00:24:01.273 "params": { 00:24:01.273 "name": "key0", 00:24:01.273 "path": "/tmp/tmp.PF0f2EVTX1" 00:24:01.273 } 00:24:01.273 } 00:24:01.273 ] 00:24:01.273 }, 00:24:01.273 { 00:24:01.273 "subsystem": "iobuf", 00:24:01.273 "config": [ 00:24:01.273 { 00:24:01.273 "method": "iobuf_set_options", 00:24:01.273 "params": { 00:24:01.273 "small_pool_count": 8192, 00:24:01.273 "large_pool_count": 1024, 00:24:01.273 "small_bufsize": 8192, 00:24:01.273 "large_bufsize": 135168, 00:24:01.273 "enable_numa": false 00:24:01.273 } 00:24:01.273 } 00:24:01.273 ] 00:24:01.273 }, 00:24:01.273 { 00:24:01.273 "subsystem": "sock", 00:24:01.273 "config": [ 00:24:01.273 { 00:24:01.273 "method": "sock_set_default_impl", 00:24:01.273 "params": { 00:24:01.273 "impl_name": "posix" 00:24:01.273 } 00:24:01.273 }, 00:24:01.273 { 00:24:01.273 "method": "sock_impl_set_options", 00:24:01.273 "params": { 00:24:01.273 "impl_name": "ssl", 00:24:01.273 "recv_buf_size": 4096, 00:24:01.273 "send_buf_size": 4096, 00:24:01.273 "enable_recv_pipe": true, 00:24:01.273 "enable_quickack": false, 00:24:01.273 "enable_placement_id": 0, 00:24:01.273 "enable_zerocopy_send_server": true, 00:24:01.273 "enable_zerocopy_send_client": false, 00:24:01.273 "zerocopy_threshold": 0, 00:24:01.273 "tls_version": 0, 00:24:01.273 "enable_ktls": false 00:24:01.273 } 00:24:01.273 }, 00:24:01.273 { 00:24:01.273 "method": "sock_impl_set_options", 00:24:01.273 "params": { 00:24:01.273 "impl_name": "posix", 00:24:01.273 "recv_buf_size": 2097152, 00:24:01.273 "send_buf_size": 2097152, 00:24:01.273 "enable_recv_pipe": true, 00:24:01.273 "enable_quickack": false, 00:24:01.273 "enable_placement_id": 0, 00:24:01.273 "enable_zerocopy_send_server": true, 00:24:01.273 "enable_zerocopy_send_client": false, 00:24:01.273 "zerocopy_threshold": 0, 00:24:01.273 "tls_version": 0, 00:24:01.273 "enable_ktls": false 00:24:01.273 } 00:24:01.273 } 00:24:01.273 ] 00:24:01.273 }, 00:24:01.273 { 00:24:01.273 "subsystem": "vmd", 00:24:01.273 "config": [] 00:24:01.273 }, 00:24:01.273 { 00:24:01.273 "subsystem": "accel", 00:24:01.273 "config": [ 00:24:01.273 { 00:24:01.273 "method": "accel_set_options", 00:24:01.273 "params": { 00:24:01.273 "small_cache_size": 128, 00:24:01.273 "large_cache_size": 16, 00:24:01.273 "task_count": 2048, 00:24:01.273 "sequence_count": 2048, 00:24:01.273 "buf_count": 2048 00:24:01.274 } 00:24:01.274 } 00:24:01.274 ] 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "subsystem": "bdev", 00:24:01.274 "config": [ 00:24:01.274 { 00:24:01.274 "method": "bdev_set_options", 00:24:01.274 "params": { 00:24:01.274 "bdev_io_pool_size": 65535, 00:24:01.274 "bdev_io_cache_size": 256, 00:24:01.274 "bdev_auto_examine": true, 00:24:01.274 "iobuf_small_cache_size": 128, 00:24:01.274 "iobuf_large_cache_size": 16 00:24:01.274 } 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "method": "bdev_raid_set_options", 00:24:01.274 "params": { 00:24:01.274 "process_window_size_kb": 1024, 00:24:01.274 "process_max_bandwidth_mb_sec": 0 00:24:01.274 } 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "method": "bdev_iscsi_set_options", 00:24:01.274 "params": { 00:24:01.274 "timeout_sec": 30 00:24:01.274 } 00:24:01.274 }, 00:24:01.274 { 
00:24:01.274 "method": "bdev_nvme_set_options", 00:24:01.274 "params": { 00:24:01.274 "action_on_timeout": "none", 00:24:01.274 "timeout_us": 0, 00:24:01.274 "timeout_admin_us": 0, 00:24:01.274 "keep_alive_timeout_ms": 10000, 00:24:01.274 "arbitration_burst": 0, 00:24:01.274 "low_priority_weight": 0, 00:24:01.274 "medium_priority_weight": 0, 00:24:01.274 "high_priority_weight": 0, 00:24:01.274 "nvme_adminq_poll_period_us": 10000, 00:24:01.274 "nvme_ioq_poll_period_us": 0, 00:24:01.274 "io_queue_requests": 512, 00:24:01.274 "delay_cmd_submit": true, 00:24:01.274 "transport_retry_count": 4, 00:24:01.274 "bdev_retry_count": 3, 00:24:01.274 "transport_ack_timeout": 0, 00:24:01.274 "ctrlr_loss_timeout_sec": 0, 00:24:01.274 "reconnect_delay_sec": 0, 00:24:01.274 "fast_io_fail_timeout_sec": 0, 00:24:01.274 "disable_auto_failback": false, 00:24:01.274 "generate_uuids": false, 00:24:01.274 "transport_tos": 0, 00:24:01.274 "nvme_error_stat": false, 00:24:01.274 "rdma_srq_size": 0, 00:24:01.274 "io_path_stat": false, 00:24:01.274 "allow_accel_sequence": false, 00:24:01.274 "rdma_max_cq_size": 0, 00:24:01.274 "rdma_cm_event_timeout_ms": 0, 00:24:01.274 "dhchap_digests": [ 00:24:01.274 "sha256", 00:24:01.274 "sha384", 00:24:01.274 "sha512" 00:24:01.274 ], 00:24:01.274 "dhchap_dhgroups": [ 00:24:01.274 "null", 00:24:01.274 "ffdhe2048", 00:24:01.274 "ffdhe3072", 00:24:01.274 "ffdhe4096", 00:24:01.274 "ffdhe6144", 00:24:01.274 "ffdhe8192" 00:24:01.274 ] 00:24:01.274 } 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "method": "bdev_nvme_attach_controller", 00:24:01.274 "params": { 00:24:01.274 "name": "nvme0", 00:24:01.274 "trtype": "TCP", 00:24:01.274 "adrfam": "IPv4", 00:24:01.274 "traddr": "10.0.0.2", 00:24:01.274 "trsvcid": "4420", 00:24:01.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.274 "prchk_reftag": false, 00:24:01.274 "prchk_guard": false, 00:24:01.274 "ctrlr_loss_timeout_sec": 0, 00:24:01.274 "reconnect_delay_sec": 0, 00:24:01.274 "fast_io_fail_timeout_sec": 0, 00:24:01.274 "psk": "key0", 00:24:01.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.274 "hdgst": false, 00:24:01.274 "ddgst": false, 00:24:01.274 "multipath": "multipath" 00:24:01.274 } 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "method": "bdev_nvme_set_hotplug", 00:24:01.274 "params": { 00:24:01.274 "period_us": 100000, 00:24:01.274 "enable": false 00:24:01.274 } 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "method": "bdev_enable_histogram", 00:24:01.274 "params": { 00:24:01.274 "name": "nvme0n1", 00:24:01.274 "enable": true 00:24:01.274 } 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "method": "bdev_wait_for_examine" 00:24:01.274 } 00:24:01.274 ] 00:24:01.274 }, 00:24:01.274 { 00:24:01.274 "subsystem": "nbd", 00:24:01.274 "config": [] 00:24:01.274 } 00:24:01.274 ] 00:24:01.274 }' 00:24:01.274 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.274 [2024-11-19 03:05:11.858237] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:01.274 [2024-11-19 03:05:11.858311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280895 ] 00:24:01.531 [2024-11-19 03:05:11.927583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.531 [2024-11-19 03:05:11.974178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.531 [2024-11-19 03:05:12.143292] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.787 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.787 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.787 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:01.787 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:02.044 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.044 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.044 Running I/O for 1 seconds... 00:24:03.416 3212.00 IOPS, 12.55 MiB/s 00:24:03.416 Latency(us) 00:24:03.416 [2024-11-19T02:05:14.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.416 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:03.416 Verification LBA range: start 0x0 length 0x2000 00:24:03.416 nvme0n1 : 1.02 3278.22 12.81 0.00 0.00 38708.37 7621.59 46020.84 00:24:03.416 [2024-11-19T02:05:14.031Z] =================================================================================================================== 00:24:03.416 [2024-11-19T02:05:14.031Z] Total : 3278.22 12.81 0.00 0.00 38708.37 7621.59 46020.84 00:24:03.416 { 00:24:03.416 "results": [ 00:24:03.416 { 00:24:03.416 "job": "nvme0n1", 00:24:03.416 "core_mask": "0x2", 00:24:03.416 "workload": "verify", 00:24:03.416 "status": "finished", 00:24:03.416 "verify_range": { 00:24:03.416 "start": 0, 00:24:03.416 "length": 8192 00:24:03.416 }, 00:24:03.417 "queue_depth": 128, 00:24:03.417 "io_size": 4096, 00:24:03.417 "runtime": 1.018845, 00:24:03.417 "iops": 3278.2219081410813, 00:24:03.417 "mibps": 12.805554328676099, 00:24:03.417 "io_failed": 0, 00:24:03.417 "io_timeout": 0, 00:24:03.417 "avg_latency_us": 38708.36820935906, 00:24:03.417 "min_latency_us": 7621.594074074074, 00:24:03.417 "max_latency_us": 46020.83555555555 00:24:03.417 } 00:24:03.417 ], 00:24:03.417 "core_count": 1 00:24:03.417 } 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.417 nvmf_trace.0 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 280895 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280895 ']' 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280895 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280895 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280895' 00:24:03.417 killing process with pid 280895 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280895 00:24:03.417 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.417 00:24:03.417 Latency(us) 00:24:03.417 [2024-11-19T02:05:14.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.417 [2024-11-19T02:05:14.032Z] =================================================================================================================== 00:24:03.417 [2024-11-19T02:05:14.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.417 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280895 00:24:03.417 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:03.417 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.417 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:03.417 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.417 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:03.417 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.417 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.417 rmmod nvme_tcp 00:24:03.674 rmmod nvme_fabrics 00:24:03.674 rmmod nvme_keyring 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.674 03:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 280744 ']' 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 280744 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280744 ']' 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280744 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280744 00:24:03.674 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.675 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.675 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280744' 00:24:03.675 killing process with pid 280744 00:24:03.675 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280744 00:24:03.675 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280744 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.933 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3hXEtMil6x /tmp/tmp.mBE4aLJ8b9 /tmp/tmp.PF0f2EVTX1 00:24:05.842 00:24:05.842 real 1m22.226s 00:24:05.842 user 2m18.565s 00:24:05.842 sys 0m24.156s 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.842 ************************************ 00:24:05.842 END TEST nvmf_tls 00:24:05.842 
************************************ 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:05.842 ************************************ 00:24:05.842 START TEST nvmf_fips 00:24:05.842 ************************************ 00:24:05.842 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:06.101 * Looking for test storage... 00:24:06.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:06.101 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.102 --rc genhtml_branch_coverage=1 00:24:06.102 --rc genhtml_function_coverage=1 00:24:06.102 --rc genhtml_legend=1 00:24:06.102 --rc geninfo_all_blocks=1 00:24:06.102 --rc geninfo_unexecuted_blocks=1 00:24:06.102 00:24:06.102 ' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.102 --rc genhtml_branch_coverage=1 00:24:06.102 --rc genhtml_function_coverage=1 00:24:06.102 --rc genhtml_legend=1 00:24:06.102 --rc geninfo_all_blocks=1 00:24:06.102 --rc geninfo_unexecuted_blocks=1 00:24:06.102 00:24:06.102 ' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.102 --rc genhtml_branch_coverage=1 00:24:06.102 --rc genhtml_function_coverage=1 00:24:06.102 --rc genhtml_legend=1 00:24:06.102 --rc geninfo_all_blocks=1 00:24:06.102 --rc geninfo_unexecuted_blocks=1 00:24:06.102 00:24:06.102 ' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.102 --rc genhtml_branch_coverage=1 00:24:06.102 --rc genhtml_function_coverage=1 00:24:06.102 --rc genhtml_legend=1 00:24:06.102 --rc geninfo_all_blocks=1 00:24:06.102 --rc geninfo_unexecuted_blocks=1 00:24:06.102 00:24:06.102 ' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:06.102 03:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:06.102 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:06.103 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:06.362 Error setting digest 00:24:06.362 40725AA5697F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:06.362 40725AA5697F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.362 
03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.362 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.897 03:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.897 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:08.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:08.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.898 03:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:08.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:08.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.898 03:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.898 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:24:08.898 00:24:08.898 --- 10.0.0.2 ping statistics --- 00:24:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.898 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:24:08.898 00:24:08.898 --- 10.0.0.1 ping statistics --- 00:24:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.898 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=283134 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 283134 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283134 ']' 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.898 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.898 [2024-11-19 03:05:19.213243] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
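
The nvmf_tcp_init trace above (nvmf/common.sh) reduces to a short, standalone sequence: move one NIC port into a private network namespace to act as the target, keep the other port in the root namespace as the initiator, address both ends, open TCP/4420 through the firewall with an SPDK-tagged rule, and ping in both directions. A minimal sketch of those steps, reconstructed from the commands logged here (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are what this particular host detected and would differ elsewhere):

  TARGET_IF=cvl_0_0                      # port handed to the target, per the trace above
  INITIATOR_IF=cvl_0_1                   # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # NVMe/TCP port, tagged with an SPDK_NVMF comment so teardown can find the rule later
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                               # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target namespace -> initiator

Splitting the two ports across namespaces keeps the kernel from short-circuiting the 10.0.0.1/10.0.0.2 traffic over loopback, so the NVMe/TCP connection actually crosses the physical link between the ports.
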
00:24:08.898 [2024-11-19 03:05:19.213321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.898 [2024-11-19 03:05:19.291618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.898 [2024-11-19 03:05:19.339422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.898 [2024-11-19 03:05:19.339486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.898 [2024-11-19 03:05:19.339515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.898 [2024-11-19 03:05:19.339527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.898 [2024-11-19 03:05:19.339538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.898 [2024-11-19 03:05:19.340202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.kUH 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.kUH 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.kUH 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.kUH 00:24:08.899 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.466 [2024-11-19 03:05:19.801054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.466 [2024-11-19 03:05:19.817076] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.466 [2024-11-19 03:05:19.817316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.466 malloc0 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.466 03:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=283283 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 283283 /var/tmp/bdevperf.sock 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283283 ']' 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.466 03:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.466 [2024-11-19 03:05:19.950498] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:09.466 [2024-11-19 03:05:19.950580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283283 ] 00:24:09.466 [2024-11-19 03:05:20.016904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.466 [2024-11-19 03:05:20.068096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.724 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.724 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:09.724 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.kUH 00:24:09.982 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.240 [2024-11-19 03:05:20.724847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.240 TLSTESTn1 00:24:10.240 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.498 Running I/O for 10 seconds... 
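
Everything fips.sh does on the initiator side above is visible in the trace: write the TLS PSK to a 0600 temp file, start bdevperf on its own RPC socket, register the key with keyring_file_add_key, attach a TLS-protected controller with --psk, and kick off I/O with perform_tests. A sketch of that flow using the values from this run (the matching target-side configuration done by setup_nvmf_tgt_conf is not expanded in the trace and is assumed to have registered the same PSK for host1/cnode1; the socket-polling loop below is only a stand-in for the waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"                                  # PSK file must not be readable by others

  # bdevperf idles (-z) on a private RPC socket until perform_tests is called
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done # stand-in for waitforlisten

  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

Keeping bdevperf's RPC socket separate from the target's /var/tmp/spdk.sock means the same rpc.py tooling can drive both processes without ambiguity.
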
00:24:12.366 3414.00 IOPS, 13.34 MiB/s [2024-11-19T02:05:24.356Z] 3430.50 IOPS, 13.40 MiB/s [2024-11-19T02:05:24.923Z] 3409.67 IOPS, 13.32 MiB/s [2024-11-19T02:05:26.337Z] 3422.50 IOPS, 13.37 MiB/s [2024-11-19T02:05:26.984Z] 3424.20 IOPS, 13.38 MiB/s [2024-11-19T02:05:28.036Z] 3399.67 IOPS, 13.28 MiB/s [2024-11-19T02:05:29.017Z] 3394.00 IOPS, 13.26 MiB/s [2024-11-19T02:05:29.950Z] 3384.75 IOPS, 13.22 MiB/s [2024-11-19T02:05:31.321Z] 3386.00 IOPS, 13.23 MiB/s [2024-11-19T02:05:31.321Z] 3389.50 IOPS, 13.24 MiB/s 00:24:20.706 Latency(us) 00:24:20.706 [2024-11-19T02:05:31.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.706 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.706 Verification LBA range: start 0x0 length 0x2000 00:24:20.706 TLSTESTn1 : 10.02 3395.94 13.27 0.00 0.00 37633.23 6310.87 33787.45 00:24:20.706 [2024-11-19T02:05:31.321Z] =================================================================================================================== 00:24:20.706 [2024-11-19T02:05:31.321Z] Total : 3395.94 13.27 0.00 0.00 37633.23 6310.87 33787.45 00:24:20.706 { 00:24:20.706 "results": [ 00:24:20.706 { 00:24:20.706 "job": "TLSTESTn1", 00:24:20.706 "core_mask": "0x4", 00:24:20.706 "workload": "verify", 00:24:20.706 "status": "finished", 00:24:20.706 "verify_range": { 00:24:20.706 "start": 0, 00:24:20.706 "length": 8192 00:24:20.706 }, 00:24:20.706 "queue_depth": 128, 00:24:20.706 "io_size": 4096, 00:24:20.706 "runtime": 10.018738, 00:24:20.706 "iops": 3395.936693823114, 00:24:20.706 "mibps": 13.26537771024654, 00:24:20.706 "io_failed": 0, 00:24:20.706 "io_timeout": 0, 00:24:20.706 "avg_latency_us": 37633.231904256485, 00:24:20.706 "min_latency_us": 6310.874074074074, 00:24:20.706 "max_latency_us": 33787.44888888889 00:24:20.706 } 00:24:20.706 ], 00:24:20.706 "core_count": 1 00:24:20.706 } 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:20.706 03:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:20.706 nvmf_trace.0 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 283283 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283283 ']' 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 283283 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283283 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283283' 00:24:20.706 killing process with pid 283283 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283283 00:24:20.706 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.706 00:24:20.706 Latency(us) 00:24:20.706 [2024-11-19T02:05:31.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.706 [2024-11-19T02:05:31.321Z] =================================================================================================================== 00:24:20.706 [2024-11-19T02:05:31.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283283 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:20.706 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.707 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:20.707 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.707 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.707 rmmod nvme_tcp 00:24:20.707 rmmod nvme_fabrics 00:24:20.965 rmmod nvme_keyring 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 283134 ']' 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 283134 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283134 ']' 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283134 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283134 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.965 03:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283134' 00:24:20.965 killing process with pid 283134 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283134 00:24:20.965 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283134 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.225 03:05:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.kUH 00:24:23.132 00:24:23.132 real 0m17.229s 00:24:23.132 user 0m22.824s 00:24:23.132 sys 0m5.357s 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:23.132 ************************************ 00:24:23.132 END TEST nvmf_fips 00:24:23.132 ************************************ 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:23.132 ************************************ 00:24:23.132 START TEST nvmf_control_msg_list 00:24:23.132 ************************************ 00:24:23.132 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:23.392 * Looking for test storage... 
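
The nvmftestfini / cleanup trace above is the mirror image of the setup: unload the kernel NVMe/TCP initiator modules, strip only the SPDK-tagged firewall rules, drop the target namespace, flush the initiator address, and delete the PSK file. A sketch of that teardown, based on the commands logged here (the _remove_spdk_ns helper is not expanded in the trace, so the ip netns delete line is an assumption about what it does):

  sync
  modprobe -v -r nvme-tcp       # verbose removal also drops nvme_fabrics and nvme_keyring, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  # Every rule the tests added carries an 'SPDK_NVMF:' comment, so a
  # save/filter/restore pass removes exactly those rules and nothing else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.kUH
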
00:24:23.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:23.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.392 --rc genhtml_branch_coverage=1 00:24:23.392 --rc genhtml_function_coverage=1 00:24:23.392 --rc genhtml_legend=1 00:24:23.392 --rc geninfo_all_blocks=1 00:24:23.392 --rc geninfo_unexecuted_blocks=1 00:24:23.392 00:24:23.392 ' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:23.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.392 --rc genhtml_branch_coverage=1 00:24:23.392 --rc genhtml_function_coverage=1 00:24:23.392 --rc genhtml_legend=1 00:24:23.392 --rc geninfo_all_blocks=1 00:24:23.392 --rc geninfo_unexecuted_blocks=1 00:24:23.392 00:24:23.392 ' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:23.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.392 --rc genhtml_branch_coverage=1 00:24:23.392 --rc genhtml_function_coverage=1 00:24:23.392 --rc genhtml_legend=1 00:24:23.392 --rc geninfo_all_blocks=1 00:24:23.392 --rc geninfo_unexecuted_blocks=1 00:24:23.392 00:24:23.392 ' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:23.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.392 --rc genhtml_branch_coverage=1 00:24:23.392 --rc genhtml_function_coverage=1 00:24:23.392 --rc genhtml_legend=1 00:24:23.392 --rc geninfo_all_blocks=1 00:24:23.392 --rc geninfo_unexecuted_blocks=1 00:24:23.392 00:24:23.392 ' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.392 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.393 03:05:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:25.929 03:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:25.929 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.929 03:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:25.929 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:25.929 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:25.929 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:25.929 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.930 03:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.930 03:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:24:25.930 00:24:25.930 --- 10.0.0.2 ping statistics --- 00:24:25.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.930 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:24:25.930 00:24:25.930 --- 10.0.0.1 ping statistics --- 00:24:25.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.930 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=286564 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 286564 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 286564 ']' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 [2024-11-19 03:05:36.175865] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:25.930 [2024-11-19 03:05:36.175952] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.930 [2024-11-19 03:05:36.249094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.930 [2024-11-19 03:05:36.296520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.930 [2024-11-19 03:05:36.296583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.930 [2024-11-19 03:05:36.296596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.930 [2024-11-19 03:05:36.296607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.930 [2024-11-19 03:05:36.296631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
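
nvmfappstart above comes down to launching nvmf_tgt inside the target namespace and waiting for its RPC socket before any configuration is attempted. A minimal sketch (waitforlisten is not expanded in the trace; polling rpc_get_methods against the rpc_addr=/var/tmp/spdk.sock socket shown above is used here as a stand-in):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Poll the RPC socket until the target answers; assumed equivalent of waitforlisten.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done
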
00:24:25.930 [2024-11-19 03:05:36.297304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 [2024-11-19 03:05:36.437338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 Malloc0 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.930 03:05:36 
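
The rpc_cmd sequence above, together with the listener registration and the three spdk_nvme_perf runs in the trace that follows, amounts to the configuration below, written as direct scripts/rpc.py calls against the default /var/tmp/spdk.sock socket (rpc_cmd in the trace is the test harness wrapper around rpc.py). The knob of interest is --control-msg-num 1, which caps the TCP transport's control message pool at a single entry; the small in-capsule data size and the three concurrent single-queue clients are presumably there to force contention on that pool:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Three single-queue randread clients on separate cores, matching the
  # spdk_nvme_perf invocations in the trace below.
  for mask in 0x2 0x4 0x8; do
    "$SPDK/build/bin/spdk_nvme_perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait
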
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.930 [2024-11-19 03:05:36.476914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=286703 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=286704 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:25.930 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=286705 00:24:25.931 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 286703 00:24:25.931 03:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.189 [2024-11-19 03:05:36.555978] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:26.189 [2024-11-19 03:05:36.556333] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:26.189 [2024-11-19 03:05:36.556591] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:27.123 Initializing NVMe Controllers 00:24:27.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:27.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:27.123 Initialization complete. Launching workers. 
00:24:27.123 ======================================================== 00:24:27.123 Latency(us) 00:24:27.123 Device Information : IOPS MiB/s Average min max 00:24:27.123 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40988.60 40862.52 41899.54 00:24:27.123 ======================================================== 00:24:27.123 Total : 25.00 0.10 40988.60 40862.52 41899.54 00:24:27.123 00:24:27.381 Initializing NVMe Controllers 00:24:27.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:27.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:27.381 Initialization complete. Launching workers. 00:24:27.381 ======================================================== 00:24:27.381 Latency(us) 00:24:27.381 Device Information : IOPS MiB/s Average min max 00:24:27.382 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 24.00 0.09 41828.91 40922.82 41954.76 00:24:27.382 ======================================================== 00:24:27.382 Total : 24.00 0.09 41828.91 40922.82 41954.76 00:24:27.382 00:24:27.382 Initializing NVMe Controllers 00:24:27.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:27.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:27.382 Initialization complete. Launching workers. 00:24:27.382 ======================================================== 00:24:27.382 Latency(us) 00:24:27.382 Device Information : IOPS MiB/s Average min max 00:24:27.382 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 24.00 0.09 41837.73 40908.90 41986.07 00:24:27.382 ======================================================== 00:24:27.382 Total : 24.00 0.09 41837.73 40908.90 41986.07 00:24:27.382 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 286704 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 286705 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.382 rmmod nvme_tcp 00:24:27.382 rmmod nvme_fabrics 00:24:27.382 rmmod nvme_keyring 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 286564 ']' 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 286564 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 286564 ']' 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 286564 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286564 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286564' 00:24:27.382 killing process with pid 286564 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 286564 00:24:27.382 03:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 286564 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.641 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.179 00:24:30.179 real 0m6.470s 00:24:30.179 user 0m6.185s 00:24:30.179 sys 0m2.539s 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.179 ************************************ 00:24:30.179 END TEST nvmf_control_msg_list 00:24:30.179 
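The nvmf_control_msg_list run above reduces to a short RPC sequence plus three concurrent initiators pinned to different cores. A minimal sketch for reproducing it by hand, assuming a built SPDK tree at $SPDK and the 10.0.0.2:4420 listener used in this run (the harness normally waits for the target's RPC socket; a short sleep stands in for that here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &        # target app, flags as used by the harness
  sleep 2
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # three initiators on separate core masks, matching the perf runs above
  for mask in 0x2 0x4 0x8; do
    $SPDK/build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait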
************************************ 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:30.179 ************************************ 00:24:30.179 START TEST nvmf_wait_for_buf 00:24:30.179 ************************************ 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:30.179 * Looking for test storage... 00:24:30.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.179 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:30.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.180 --rc genhtml_branch_coverage=1 00:24:30.180 --rc genhtml_function_coverage=1 00:24:30.180 --rc genhtml_legend=1 00:24:30.180 --rc geninfo_all_blocks=1 00:24:30.180 --rc geninfo_unexecuted_blocks=1 00:24:30.180 00:24:30.180 ' 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:30.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.180 --rc genhtml_branch_coverage=1 00:24:30.180 --rc genhtml_function_coverage=1 00:24:30.180 --rc genhtml_legend=1 00:24:30.180 --rc geninfo_all_blocks=1 00:24:30.180 --rc geninfo_unexecuted_blocks=1 00:24:30.180 00:24:30.180 ' 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:30.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.180 --rc genhtml_branch_coverage=1 00:24:30.180 --rc genhtml_function_coverage=1 00:24:30.180 --rc genhtml_legend=1 00:24:30.180 --rc geninfo_all_blocks=1 00:24:30.180 --rc geninfo_unexecuted_blocks=1 00:24:30.180 00:24:30.180 ' 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:30.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.180 --rc genhtml_branch_coverage=1 00:24:30.180 --rc genhtml_function_coverage=1 00:24:30.180 --rc genhtml_legend=1 00:24:30.180 --rc geninfo_all_blocks=1 00:24:30.180 --rc geninfo_unexecuted_blocks=1 00:24:30.180 00:24:30.180 ' 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.180 03:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.180 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.181 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.084 
03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:32.084 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:32.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:32.084 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:32.084 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.084 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.085 03:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.085 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:24:32.344 00:24:32.344 --- 10.0.0.2 ping statistics --- 00:24:32.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.344 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:32.344 00:24:32.344 --- 10.0.0.1 ping statistics --- 00:24:32.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.344 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=288788 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 288788 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 288788 ']' 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.344 03:05:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.344 [2024-11-19 03:05:42.835158] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
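At this point the target application is starting inside the cvl_0_0_ns_spdk namespace created earlier; the topology it runs on was assembled from the detected ice ports roughly as follows, condensed from the traced commands (interface and namespace names are the ones reported on this machine, not fixed values):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator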
00:24:32.344 [2024-11-19 03:05:42.835231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.344 [2024-11-19 03:05:42.905732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.344 [2024-11-19 03:05:42.949119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.344 [2024-11-19 03:05:42.949177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.344 [2024-11-19 03:05:42.949206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.344 [2024-11-19 03:05:42.949217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.344 [2024-11-19 03:05:42.949227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.344 [2024-11-19 03:05:42.949849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 Malloc0 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 [2024-11-19 03:05:43.193392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 [2024-11-19 03:05:43.217635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.604 03:05:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.861 [2024-11-19 03:05:43.302800] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:34.249 Initializing NVMe Controllers 00:24:34.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:34.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:34.249 Initialization complete. Launching workers. 00:24:34.249 ======================================================== 00:24:34.249 Latency(us) 00:24:34.249 Device Information : IOPS MiB/s Average min max 00:24:34.249 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32353.95 7999.00 63878.03 00:24:34.249 ======================================================== 00:24:34.249 Total : 129.00 16.12 32353.95 7999.00 63878.03 00:24:34.249 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.249 rmmod nvme_tcp 00:24:34.249 rmmod nvme_fabrics 00:24:34.249 rmmod nvme_keyring 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 288788 ']' 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 288788 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 288788 ']' 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 288788 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
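The non-zero retry_count above (2038) is the point of the wait_for_buf test: the shared iobuf small pool is shrunk before framework initialization, a large-block read workload is pushed through the TCP transport, and the nvmf_TCP module is expected to have waited and retried for buffers rather than failing I/O. A condensed sketch of that check, reusing the RPCs traced above (the target is assumed to have been started with --wait-for-rpc; pool sizes and the listener address are the ones from this run):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  $SPDK/scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $SPDK/scripts/rpc.py framework_start_init
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  # subsystem, Malloc0 namespace and 10.0.0.2:4420 listener as in the earlier sketch
  $SPDK/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  retries=$($SPDK/scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retries -ne 0 ]] && echo "buffer waits observed: $retries"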
common/autotest_common.sh@959 -- # uname 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288788 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288788' 00:24:34.249 killing process with pid 288788 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 288788 00:24:34.249 03:05:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 288788 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.510 03:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.048 00:24:37.048 real 0m6.836s 00:24:37.048 user 0m3.146s 00:24:37.048 sys 0m2.041s 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.048 ************************************ 00:24:37.048 END TEST nvmf_wait_for_buf 00:24:37.048 ************************************ 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.048 ************************************ 00:24:37.048 START TEST nvmf_fuzz 00:24:37.048 ************************************ 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:37.048 * Looking for test storage... 00:24:37.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.048 --rc genhtml_branch_coverage=1 00:24:37.048 --rc genhtml_function_coverage=1 00:24:37.048 --rc genhtml_legend=1 00:24:37.048 --rc geninfo_all_blocks=1 00:24:37.048 --rc geninfo_unexecuted_blocks=1 00:24:37.048 00:24:37.048 ' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.048 --rc genhtml_branch_coverage=1 00:24:37.048 --rc genhtml_function_coverage=1 00:24:37.048 --rc genhtml_legend=1 00:24:37.048 --rc geninfo_all_blocks=1 00:24:37.048 --rc geninfo_unexecuted_blocks=1 00:24:37.048 00:24:37.048 ' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.048 --rc genhtml_branch_coverage=1 00:24:37.048 --rc genhtml_function_coverage=1 00:24:37.048 --rc genhtml_legend=1 00:24:37.048 --rc geninfo_all_blocks=1 00:24:37.048 --rc geninfo_unexecuted_blocks=1 00:24:37.048 00:24:37.048 ' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.048 --rc genhtml_branch_coverage=1 00:24:37.048 --rc genhtml_function_coverage=1 00:24:37.048 --rc genhtml_legend=1 00:24:37.048 --rc geninfo_all_blocks=1 00:24:37.048 --rc geninfo_unexecuted_blocks=1 00:24:37.048 00:24:37.048 ' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.048 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
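Note the harness warning embedded above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test rejects an empty string as an operand of -eq, so it prints "integer expression expected" and the branch is simply not taken. Which flag is empty there is not visible in this log; a hedged illustration of the failure mode and the usual defaulting guard (SOME_FLAG is a placeholder name, not the script's variable):

    # Reproduces the message seen in the trace: an unset/empty variable used with -eq.
    unset SOME_FLAG
    [ "$SOME_FLAG" -eq 1 ] && echo enabled    # -> "[: : integer expression expected"

    # Defensive variant that treats an empty value as 0 instead of erroring:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
    fi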
SIGINT SIGTERM EXIT 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.049 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.949 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.949 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.949 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.949 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.949 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
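What the discovery loop above is doing: for each supported NIC PCI function it globs the kernel's sysfs tree to find the bound net device, strips the path prefix, and collects the interface name (here cvl_0_0 and cvl_0_1, one per port of the E810). A condensed sketch of that lookup, using the same sysfs path as the trace:

    # Resolve each PCI function to its netdev name via sysfs, as the trace does.
    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
    done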
NVMF_SECOND_INITIATOR_IP= 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:24:38.950 00:24:38.950 --- 10.0.0.2 ping statistics --- 00:24:38.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.950 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:24:38.950 00:24:38.950 --- 10.0.0.1 ping statistics --- 00:24:38.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.950 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=290995 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 290995 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 290995 ']' 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
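The sequence above sets up the physical-NIC TCP topology the test runs on: one port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables ACCEPT rule opens port 4420, connectivity is checked with one ping in each direction, and nvmf_tgt is started inside the namespace. Condensed from the commands visible in the trace (paths shortened, iptables comment elided):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                        # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:...'          # tagged so cleanup can find it later
    ping -c 1 10.0.0.2                                     # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                 # target namespace -> root namespace
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # target app in the namespace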
00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.950 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.515 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.515 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:39.515 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 Malloc0 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:39.516 03:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:11.578 Fuzzing completed. 
Shutting down the fuzz application 00:25:11.578 00:25:11.578 Dumping successful admin opcodes: 00:25:11.578 8, 9, 10, 24, 00:25:11.578 Dumping successful io opcodes: 00:25:11.578 0, 9, 00:25:11.578 NS: 0x2000008eff00 I/O qp, Total commands completed: 505390, total successful commands: 2910, random_seed: 3049308608 00:25:11.578 NS: 0x2000008eff00 admin qp, Total commands completed: 60384, total successful commands: 479, random_seed: 2208967104 00:25:11.578 03:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:11.578 Fuzzing completed. Shutting down the fuzz application 00:25:11.578 00:25:11.578 Dumping successful admin opcodes: 00:25:11.578 24, 00:25:11.578 Dumping successful io opcodes: 00:25:11.578 00:25:11.578 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1688923176 00:25:11.578 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1689036468 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.578 rmmod nvme_tcp 00:25:11.578 rmmod nvme_fabrics 00:25:11.578 rmmod nvme_keyring 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 290995 ']' 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 290995 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 290995 ']' 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 290995 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:11.578 03:06:21 
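Target provisioning and the two fuzz passes, as traced above: rpc_cmd (the harness helper that forwards to scripts/rpc.py) creates the TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a 10.0.0.2:4420 listener; nvme_fuzz then runs once for 30 seconds of seeded random commands and once replaying the bundled example.json. A condensed restatement of those steps (paths relative to the spdk checkout):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192         # TCP transport, options exactly as traced
    rpc_cmd bdev_malloc_create -b Malloc0 64 512             # 64 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # Pass 1: 30 s of pseudo-random commands with a fixed seed (admin and I/O opcodes in the dump above).
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    # Pass 2: replay the canned command set shipped with the fuzzer.
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a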
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.578 03:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290995 00:25:11.578 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.578 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.578 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290995' 00:25:11.578 killing process with pid 290995 00:25:11.578 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 290995 00:25:11.578 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 290995 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.838 03:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.740 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.740 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:13.740 00:25:13.740 real 0m37.219s 00:25:13.740 user 0m52.058s 00:25:13.740 sys 0m13.991s 00:25:13.740 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.740 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.740 ************************************ 00:25:13.740 END TEST nvmf_fuzz 00:25:13.740 ************************************ 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:13.999 
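Teardown mirrors setup, as the tail of the run above shows: the kernel NVMe-oF modules are unloaded, the target process is killed by pid, the SPDK_NVMF-tagged iptables rule is dropped by filtering it out of iptables-save output, and the namespace plus leftover addresses and fuzz logs are removed. Roughly, with paths shortened and the namespace removal stated as an assumption (the _remove_spdk_ns helper body is not shown in this log):

    modprobe -v -r nvme-tcp                                  # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                          # stop nvmf_tgt (pid 290995 in this run)
    wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rule tagged at setup
    ip netns delete cvl_0_0_ns_spdk                          # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
    rm ../output/nvmf_fuzz_logs1.txt ../output/nvmf_fuzz_logs2.txt   # logs from the two fuzz passes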
************************************ 00:25:13.999 START TEST nvmf_multiconnection 00:25:13.999 ************************************ 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:13.999 * Looking for test storage... 00:25:13.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.999 --rc genhtml_branch_coverage=1 00:25:13.999 --rc genhtml_function_coverage=1 00:25:13.999 --rc genhtml_legend=1 00:25:13.999 --rc geninfo_all_blocks=1 00:25:13.999 --rc geninfo_unexecuted_blocks=1 00:25:13.999 00:25:13.999 ' 00:25:13.999 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:14.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.000 --rc genhtml_branch_coverage=1 00:25:14.000 --rc genhtml_function_coverage=1 00:25:14.000 --rc genhtml_legend=1 00:25:14.000 --rc geninfo_all_blocks=1 00:25:14.000 --rc geninfo_unexecuted_blocks=1 00:25:14.000 00:25:14.000 ' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:14.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.000 --rc genhtml_branch_coverage=1 00:25:14.000 --rc genhtml_function_coverage=1 00:25:14.000 --rc genhtml_legend=1 00:25:14.000 --rc geninfo_all_blocks=1 00:25:14.000 --rc geninfo_unexecuted_blocks=1 00:25:14.000 00:25:14.000 ' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:14.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.000 --rc genhtml_branch_coverage=1 00:25:14.000 --rc genhtml_function_coverage=1 00:25:14.000 --rc genhtml_legend=1 00:25:14.000 --rc geninfo_all_blocks=1 00:25:14.000 --rc geninfo_unexecuted_blocks=1 00:25:14.000 00:25:14.000 ' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.000 03:06:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.529 03:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:16.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:16.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.529 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:16.530 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:16.530 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:25:16.530 00:25:16.530 --- 10.0.0.2 ping statistics --- 00:25:16.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.530 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:16.530 00:25:16.530 --- 10.0.0.1 ping statistics --- 00:25:16.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.530 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=297340 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 297340 00:25:16.530 03:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 297340 ']' 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.530 03:06:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.530 [2024-11-19 03:06:26.885952] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:25:16.530 [2024-11-19 03:06:26.886067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.530 [2024-11-19 03:06:26.960756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.530 [2024-11-19 03:06:27.007591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.530 [2024-11-19 03:06:27.007646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.530 [2024-11-19 03:06:27.007686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.530 [2024-11-19 03:06:27.007707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.530 [2024-11-19 03:06:27.007718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
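For reference, the network plumbing that the nvmf_tcp_init trace above performs on the two E810 ports boils down to the sequence below. This is only a condensed restatement of commands already visible in the log; the interface names (cvl_0_0, cvl_0_1), the 10.0.0.1/10.0.0.2 addresses and the nvmf_tgt binary path are simply what this particular run detected and chose.

  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  # the test launches nvmf_tgt in the background and then waits for it to listen on /var/tmp/spdk.sock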
00:25:16.530 [2024-11-19 03:06:27.009171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.530 [2024-11-19 03:06:27.009196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.530 [2024-11-19 03:06:27.009255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.530 [2024-11-19 03:06:27.009258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.530 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.788 [2024-11-19 03:06:27.149238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.788 Malloc1 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:16.788 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
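The TCP transport is created once, after which target/multiconnection.sh repeats the same four RPCs for each of the eleven subsystems; the records that follow (Malloc1/cnode1 through Malloc11/cnode11) are exactly this loop unrolled. Condensed, with rpc_cmd being the autotest wrapper around scripts/rpc.py:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as recorded above
  for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done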
00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 [2024-11-19 03:06:27.214641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 Malloc2 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 Malloc3 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 Malloc4 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 Malloc5 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.789 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 Malloc6 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 Malloc7 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
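Once the remaining subsystems are set up the same way, the initiator side of this trace attaches each one over TCP and waits for its namespace to appear as a block device before moving on. Paraphrasing the nvme connect / waitforserial records that follow (the host NQN and host ID are this machine's UUID, 5b23e107-7094-e311-b1cb-001e67a97d55; waitforserial's retry cap of 15 iterations is left implicit here):

  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  for i in $(seq 1 11); do
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid --hostid=$hostid \
         -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # waitforserial SPDK$i: poll lsblk every 2 s until a namespace with that serial shows up
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
      sleep 2
    done
  done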
00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 Malloc8 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 Malloc9 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:17.048 03:06:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.048 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.049 Malloc10 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.049 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.307 Malloc11 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.307 03:06:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:17.872 03:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:17.872 03:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:17.872 03:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.872 03:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:17.872 03:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.768 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:20.333 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:20.334 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:20.334 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.334 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:20.334 03:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.858 03:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:23.115 03:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:23.115 03:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:23.115 03:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:23.115 03:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:23.115 03:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.640 03:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:25.897 03:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:25.897 03:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:25.897 03:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.897 03:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:25.897 03:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.793 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:28.726 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:28.726 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:28.726 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.726 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:28.726 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.253 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:31.511 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:31.511 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:31.511 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.511 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:31.511 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.035 03:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:34.600 03:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:34.600 03:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:34.600 03:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.600 03:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:34.600 03:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:36.494 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:36.495 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:36.495 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:36.495 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:36.495 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.495 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:36.495 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.495 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:37.426 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:37.426 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:37.426 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.426 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:37.426 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.322 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:40.254 03:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:40.254 03:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:40.254 03:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:40.254 03:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:40.254 03:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.150 03:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:43.083 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:43.083 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:43.083 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:43.083 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:43.083 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.980 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.980 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.980 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:45.237 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:45.237 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:45.237 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:45.237 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.237 03:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:46.169 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:46.169 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:46.169 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.169 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:46.169 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:48.064 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:48.064 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:48.064 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:48.064 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:48.064 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:48.064 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:48.064 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:48.064 [global] 00:25:48.064 thread=1 00:25:48.064 invalidate=1 00:25:48.064 rw=read 00:25:48.064 time_based=1 00:25:48.064 runtime=10 00:25:48.064 ioengine=libaio 00:25:48.064 direct=1 00:25:48.064 bs=262144 00:25:48.064 iodepth=64 00:25:48.064 norandommap=1 00:25:48.064 numjobs=1 00:25:48.064 00:25:48.064 [job0] 00:25:48.064 filename=/dev/nvme0n1 00:25:48.064 [job1] 00:25:48.064 filename=/dev/nvme10n1 00:25:48.064 [job2] 00:25:48.064 filename=/dev/nvme1n1 00:25:48.064 [job3] 00:25:48.064 filename=/dev/nvme2n1 00:25:48.064 [job4] 00:25:48.064 filename=/dev/nvme3n1 00:25:48.064 [job5] 00:25:48.064 filename=/dev/nvme4n1 00:25:48.064 [job6] 00:25:48.064 filename=/dev/nvme5n1 00:25:48.064 [job7] 00:25:48.064 filename=/dev/nvme6n1 00:25:48.064 [job8] 00:25:48.064 filename=/dev/nvme7n1 00:25:48.064 [job9] 00:25:48.064 filename=/dev/nvme8n1 00:25:48.064 [job10] 00:25:48.064 filename=/dev/nvme9n1 00:25:48.064 Could not set queue depth (nvme0n1) 00:25:48.064 Could not set queue depth (nvme10n1) 00:25:48.064 Could not set queue depth (nvme1n1) 00:25:48.064 Could not set queue depth (nvme2n1) 00:25:48.064 Could not set queue depth (nvme3n1) 00:25:48.064 Could not set queue depth (nvme4n1) 00:25:48.064 Could not set queue depth (nvme5n1) 00:25:48.064 Could not set queue depth (nvme6n1) 00:25:48.064 Could not set queue depth (nvme7n1) 00:25:48.064 Could not set queue depth (nvme8n1) 00:25:48.064 Could not set queue depth (nvme9n1) 00:25:48.321 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.321 fio-3.35 00:25:48.321 Starting 11 threads 00:26:00.517 00:26:00.517 job0: (groupid=0, jobs=1): err= 0: pid=301605: Tue Nov 19 03:07:09 2024 00:26:00.517 read: IOPS=182, BW=45.5MiB/s (47.8MB/s)(466MiB/10220msec) 00:26:00.517 slat (usec): min=9, max=562991, avg=3517.65, stdev=29448.31 00:26:00.517 clat (usec): min=1257, max=1892.0k, avg=347424.53, stdev=432252.85 00:26:00.517 lat (usec): min=1316, max=1892.0k, avg=350942.19, stdev=436550.76 00:26:00.517 clat percentiles (msec): 00:26:00.517 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 35], 20.00th=[ 43], 00:26:00.517 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 72], 60.00th=[ 201], 00:26:00.517 | 70.00th=[ 518], 80.00th=[ 735], 90.00th=[ 1011], 95.00th=[ 1284], 00:26:00.517 | 99.00th=[ 1636], 99.50th=[ 1670], 99.90th=[ 1888], 99.95th=[ 1888], 00:26:00.517 | 99.99th=[ 1888] 00:26:00.517 bw ( KiB/s): min= 6144, max=246272, per=7.57%, avg=46053.45, stdev=67685.82, samples=20 00:26:00.517 iops : min= 24, max= 962, avg=179.85, stdev=264.42, samples=20 00:26:00.517 lat (msec) : 2=0.11%, 4=4.30%, 10=1.18%, 20=0.70%, 50=36.25% 00:26:00.517 lat (msec) : 100=9.61%, 250=11.22%, 500=5.96%, 750=12.30%, 1000=8.16% 00:26:00.517 lat (msec) : 2000=10.20% 00:26:00.517 cpu : usr=0.06%, sys=0.69%, ctx=439, majf=0, minf=4097 00:26:00.517 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:00.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.517 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.517 issued rwts: total=1862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.517 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.517 job1: (groupid=0, jobs=1): err= 0: pid=301606: Tue Nov 19 03:07:09 2024 00:26:00.517 read: IOPS=125, BW=31.4MiB/s (33.0MB/s)(320MiB/10166msec) 00:26:00.517 slat (usec): min=13, max=502461, avg=7836.56, stdev=39776.37 00:26:00.517 clat (msec): min=62, max=1864, avg=500.77, stdev=373.17 00:26:00.517 lat (msec): min=62, max=1864, avg=508.61, stdev=378.65 00:26:00.517 clat percentiles (msec): 00:26:00.517 | 1.00th=[ 87], 5.00th=[ 127], 10.00th=[ 153], 20.00th=[ 230], 00:26:00.517 | 30.00th=[ 275], 40.00th=[ 296], 50.00th=[ 338], 60.00th=[ 409], 00:26:00.517 | 70.00th=[ 558], 80.00th=[ 827], 90.00th=[ 1116], 95.00th=[ 1250], 00:26:00.517 | 99.00th=[ 1620], 99.50th=[ 1770], 99.90th=[ 1871], 
99.95th=[ 1871], 00:26:00.517 | 99.99th=[ 1871] 00:26:00.517 bw ( KiB/s): min= 3584, max=114688, per=5.11%, avg=31076.60, stdev=26738.75, samples=20 00:26:00.517 iops : min= 14, max= 448, avg=121.35, stdev=104.47, samples=20 00:26:00.517 lat (msec) : 100=2.50%, 250=19.56%, 500=42.88%, 750=13.07%, 1000=7.90% 00:26:00.517 lat (msec) : 2000=14.08% 00:26:00.517 cpu : usr=0.06%, sys=0.54%, ctx=140, majf=0, minf=4097 00:26:00.517 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:26:00.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.517 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.517 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.517 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.517 job2: (groupid=0, jobs=1): err= 0: pid=301607: Tue Nov 19 03:07:09 2024 00:26:00.517 read: IOPS=342, BW=85.5MiB/s (89.7MB/s)(859MiB/10045msec) 00:26:00.517 slat (usec): min=10, max=219005, avg=2768.08, stdev=11090.69 00:26:00.517 clat (usec): min=1587, max=607040, avg=184167.97, stdev=108767.56 00:26:00.517 lat (usec): min=1631, max=658443, avg=186936.05, stdev=110390.93 00:26:00.517 clat percentiles (msec): 00:26:00.517 | 1.00th=[ 9], 5.00th=[ 47], 10.00th=[ 65], 20.00th=[ 94], 00:26:00.517 | 30.00th=[ 142], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 176], 00:26:00.517 | 70.00th=[ 190], 80.00th=[ 262], 90.00th=[ 338], 95.00th=[ 409], 00:26:00.517 | 99.00th=[ 518], 99.50th=[ 550], 99.90th=[ 609], 99.95th=[ 609], 00:26:00.517 | 99.99th=[ 609] 00:26:00.517 bw ( KiB/s): min=33280, max=250880, per=14.19%, avg=86367.70, stdev=49496.74, samples=20 00:26:00.517 iops : min= 130, max= 980, avg=337.35, stdev=193.36, samples=20 00:26:00.517 lat (msec) : 2=0.06%, 4=0.15%, 10=1.66%, 20=0.29%, 50=3.11% 00:26:00.517 lat (msec) : 100=15.22%, 250=58.68%, 500=19.09%, 750=1.75% 00:26:00.517 cpu : usr=0.19%, sys=1.11%, ctx=542, majf=0, minf=4097 00:26:00.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:00.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.517 issued rwts: total=3437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.517 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.517 job3: (groupid=0, jobs=1): err= 0: pid=301608: Tue Nov 19 03:07:09 2024 00:26:00.517 read: IOPS=274, BW=68.7MiB/s (72.0MB/s)(700MiB/10181msec) 00:26:00.517 slat (usec): min=9, max=197744, avg=3318.54, stdev=14849.88 00:26:00.518 clat (msec): min=11, max=1076, avg=229.34, stdev=237.43 00:26:00.518 lat (msec): min=11, max=1076, avg=232.66, stdev=241.20 00:26:00.518 clat percentiles (msec): 00:26:00.518 | 1.00th=[ 19], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 42], 00:26:00.518 | 30.00th=[ 70], 40.00th=[ 110], 50.00th=[ 146], 60.00th=[ 176], 00:26:00.518 | 70.00th=[ 228], 80.00th=[ 397], 90.00th=[ 651], 95.00th=[ 785], 00:26:00.518 | 99.00th=[ 919], 99.50th=[ 961], 99.90th=[ 978], 99.95th=[ 1083], 00:26:00.518 | 99.99th=[ 1083] 00:26:00.518 bw ( KiB/s): min=14336, max=297984, per=11.50%, avg=70008.20, stdev=76257.03, samples=20 00:26:00.518 iops : min= 56, max= 1164, avg=273.45, stdev=297.88, samples=20 00:26:00.518 lat (msec) : 20=1.18%, 50=22.30%, 100=14.80%, 250=34.56%, 500=11.47% 00:26:00.518 lat (msec) : 750=8.58%, 1000=7.04%, 2000=0.07% 00:26:00.518 cpu : usr=0.16%, sys=0.95%, ctx=478, majf=0, minf=4097 00:26:00.518 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:26:00.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.518 issued rwts: total=2798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.518 job4: (groupid=0, jobs=1): err= 0: pid=301609: Tue Nov 19 03:07:09 2024 00:26:00.518 read: IOPS=198, BW=49.6MiB/s (52.1MB/s)(506MiB/10183msec) 00:26:00.518 slat (usec): min=8, max=219083, avg=4661.18, stdev=17944.76 00:26:00.518 clat (usec): min=1749, max=1071.7k, avg=317390.96, stdev=230527.36 00:26:00.518 lat (usec): min=1838, max=1076.8k, avg=322052.14, stdev=234147.11 00:26:00.518 clat percentiles (msec): 00:26:00.518 | 1.00th=[ 3], 5.00th=[ 47], 10.00th=[ 97], 20.00th=[ 148], 00:26:00.518 | 30.00th=[ 171], 40.00th=[ 199], 50.00th=[ 234], 60.00th=[ 279], 00:26:00.518 | 70.00th=[ 359], 80.00th=[ 535], 90.00th=[ 709], 95.00th=[ 810], 00:26:00.518 | 99.00th=[ 885], 99.50th=[ 911], 99.90th=[ 944], 99.95th=[ 953], 00:26:00.518 | 99.99th=[ 1070] 00:26:00.518 bw ( KiB/s): min=16384, max=136192, per=8.23%, avg=50112.95, stdev=35142.93, samples=20 00:26:00.518 iops : min= 64, max= 532, avg=195.70, stdev=137.26, samples=20 00:26:00.518 lat (msec) : 2=0.05%, 4=2.52%, 20=0.25%, 50=2.42%, 100=5.29% 00:26:00.518 lat (msec) : 250=43.32%, 500=24.04%, 750=14.84%, 1000=7.22%, 2000=0.05% 00:26:00.518 cpu : usr=0.09%, sys=0.76%, ctx=387, majf=0, minf=4098 00:26:00.518 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:00.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.518 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.518 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.518 job5: (groupid=0, jobs=1): err= 0: pid=301610: Tue Nov 19 03:07:09 2024 00:26:00.518 read: IOPS=232, BW=58.2MiB/s (61.0MB/s)(593MiB/10185msec) 00:26:00.518 slat (usec): min=9, max=628672, avg=2009.69, stdev=19337.04 00:26:00.518 clat (usec): min=1531, max=1432.5k, avg=272789.12, stdev=325324.15 00:26:00.518 lat (usec): min=1559, max=1432.5k, avg=274798.81, stdev=327000.65 00:26:00.518 clat percentiles (msec): 00:26:00.518 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 31], 20.00th=[ 59], 00:26:00.518 | 30.00th=[ 64], 40.00th=[ 79], 50.00th=[ 125], 60.00th=[ 192], 00:26:00.518 | 70.00th=[ 305], 80.00th=[ 414], 90.00th=[ 919], 95.00th=[ 1062], 00:26:00.518 | 99.00th=[ 1217], 99.50th=[ 1267], 99.90th=[ 1435], 99.95th=[ 1435], 00:26:00.518 | 99.99th=[ 1435] 00:26:00.518 bw ( KiB/s): min= 8704, max=221184, per=10.21%, avg=62135.95, stdev=57446.53, samples=19 00:26:00.518 iops : min= 34, max= 864, avg=242.63, stdev=224.47, samples=19 00:26:00.518 lat (msec) : 2=0.08%, 4=0.55%, 10=5.99%, 20=0.38%, 50=9.16% 00:26:00.518 lat (msec) : 100=27.13%, 250=19.66%, 500=21.56%, 750=2.78%, 1000=4.81% 00:26:00.518 lat (msec) : 2000=7.89% 00:26:00.518 cpu : usr=0.11%, sys=0.72%, ctx=711, majf=0, minf=3721 00:26:00.518 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:00.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.518 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.518 latency : target=0, window=0, percentile=100.00%, depth=64 
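[Editorial cross-check, added for clarity and not part of the captured fio output: the bw and iops lines above are mutually consistent with the 256KiB block size. For job5, 62135.95 KiB/s divided by 256 KiB gives roughly 242.7 IOPS, matching avg=242.63; and per=10.21% is job5's share of the aggregate READ bandwidth of 594MiB/s (about 608256 KiB/s) reported in the run-status line further down, since 62135.95 / 608256 is roughly 10.2%.]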
00:26:00.518 job6: (groupid=0, jobs=1): err= 0: pid=301611: Tue Nov 19 03:07:09 2024 00:26:00.518 read: IOPS=374, BW=93.7MiB/s (98.2MB/s)(941MiB/10049msec) 00:26:00.518 slat (usec): min=9, max=154576, avg=2181.46, stdev=9512.26 00:26:00.518 clat (usec): min=1547, max=1092.6k, avg=168512.01, stdev=123260.24 00:26:00.518 lat (usec): min=1612, max=1092.6k, avg=170693.47, stdev=124782.13 00:26:00.518 clat percentiles (msec): 00:26:00.518 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 38], 00:26:00.518 | 30.00th=[ 108], 40.00th=[ 150], 50.00th=[ 161], 60.00th=[ 171], 00:26:00.518 | 70.00th=[ 186], 80.00th=[ 232], 90.00th=[ 355], 95.00th=[ 405], 00:26:00.518 | 99.00th=[ 527], 99.50th=[ 558], 99.90th=[ 995], 99.95th=[ 1099], 00:26:00.518 | 99.99th=[ 1099] 00:26:00.518 bw ( KiB/s): min=27136, max=322048, per=15.57%, avg=94762.25, stdev=63424.84, samples=20 00:26:00.518 iops : min= 106, max= 1258, avg=370.15, stdev=247.75, samples=20 00:26:00.518 lat (msec) : 2=0.05%, 4=0.08%, 20=0.03%, 50=23.00%, 100=5.76% 00:26:00.518 lat (msec) : 250=53.39%, 500=16.33%, 750=1.09%, 1000=0.21%, 2000=0.05% 00:26:00.518 cpu : usr=0.19%, sys=1.48%, ctx=774, majf=0, minf=4097 00:26:00.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:00.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.518 issued rwts: total=3765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.518 job7: (groupid=0, jobs=1): err= 0: pid=301612: Tue Nov 19 03:07:09 2024 00:26:00.518 read: IOPS=137, BW=34.3MiB/s (36.0MB/s)(349MiB/10168msec) 00:26:00.518 slat (usec): min=9, max=530863, avg=6565.02, stdev=35189.48 00:26:00.518 clat (msec): min=27, max=1576, avg=459.21, stdev=332.63 00:26:00.518 lat (msec): min=27, max=1576, avg=465.78, stdev=338.04 00:26:00.518 clat percentiles (msec): 00:26:00.518 | 1.00th=[ 47], 5.00th=[ 148], 10.00th=[ 167], 20.00th=[ 222], 00:26:00.518 | 30.00th=[ 257], 40.00th=[ 284], 50.00th=[ 334], 60.00th=[ 401], 00:26:00.518 | 70.00th=[ 468], 80.00th=[ 693], 90.00th=[ 1028], 95.00th=[ 1267], 00:26:00.518 | 99.00th=[ 1418], 99.50th=[ 1452], 99.90th=[ 1569], 99.95th=[ 1569], 00:26:00.518 | 99.99th=[ 1569] 00:26:00.518 bw ( KiB/s): min= 7168, max=80384, per=5.60%, avg=34098.10, stdev=23004.56, samples=20 00:26:00.518 iops : min= 28, max= 314, avg=133.15, stdev=89.91, samples=20 00:26:00.518 lat (msec) : 50=1.22%, 100=1.07%, 250=24.43%, 500=45.13%, 750=11.75% 00:26:00.518 lat (msec) : 1000=5.16%, 2000=11.25% 00:26:00.518 cpu : usr=0.05%, sys=0.52%, ctx=163, majf=0, minf=4097 00:26:00.518 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:26:00.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.518 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.518 issued rwts: total=1396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.518 job8: (groupid=0, jobs=1): err= 0: pid=301613: Tue Nov 19 03:07:09 2024 00:26:00.518 read: IOPS=305, BW=76.4MiB/s (80.1MB/s)(778MiB/10180msec) 00:26:00.518 slat (usec): min=8, max=654258, avg=1559.93, stdev=19292.13 00:26:00.518 clat (usec): min=1187, max=1698.5k, avg=207679.29, stdev=304351.77 00:26:00.518 lat (usec): min=1454, max=1698.5k, avg=209239.22, stdev=306770.99 00:26:00.518 clat percentiles (usec): 00:26:00.518 | 
1.00th=[ 1909], 5.00th=[ 2671], 10.00th=[ 3982], 00:26:00.518 | 20.00th=[ 17695], 30.00th=[ 23462], 40.00th=[ 45351], 00:26:00.518 | 50.00th=[ 62129], 60.00th=[ 80217], 70.00th=[ 166724], 00:26:00.518 | 80.00th=[ 387974], 90.00th=[ 725615], 95.00th=[ 851444], 00:26:00.518 | 99.00th=[1199571], 99.50th=[1551893], 99.90th=[1568670], 00:26:00.518 | 99.95th=[1702888], 99.99th=[1702888] 00:26:00.518 bw ( KiB/s): min=11776, max=327680, per=12.82%, avg=78053.95, stdev=90168.94, samples=20 00:26:00.518 iops : min= 46, max= 1280, avg=304.85, stdev=352.23, samples=20 00:26:00.518 lat (msec) : 2=1.80%, 4=8.26%, 10=4.11%, 20=10.13%, 50=18.19% 00:26:00.518 lat (msec) : 100=20.96%, 250=10.99%, 500=8.49%, 750=8.94%, 1000=4.79% 00:26:00.518 lat (msec) : 2000=3.34% 00:26:00.518 cpu : usr=0.17%, sys=0.99%, ctx=1210, majf=0, minf=4097 00:26:00.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:00.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.518 issued rwts: total=3111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.518 job9: (groupid=0, jobs=1): err= 0: pid=301614: Tue Nov 19 03:07:09 2024 00:26:00.518 read: IOPS=108, BW=27.1MiB/s (28.4MB/s)(275MiB/10156msec) 00:26:00.518 slat (usec): min=9, max=614549, avg=5675.58, stdev=36548.52 00:26:00.518 clat (msec): min=98, max=1624, avg=584.56, stdev=285.50 00:26:00.518 lat (msec): min=98, max=1624, avg=590.23, stdev=291.34 00:26:00.518 clat percentiles (msec): 00:26:00.518 | 1.00th=[ 101], 5.00th=[ 155], 10.00th=[ 205], 20.00th=[ 292], 00:26:00.518 | 30.00th=[ 368], 40.00th=[ 460], 50.00th=[ 625], 60.00th=[ 718], 00:26:00.518 | 70.00th=[ 776], 80.00th=[ 827], 90.00th=[ 969], 95.00th=[ 1011], 00:26:00.518 | 99.00th=[ 1133], 99.50th=[ 1183], 99.90th=[ 1620], 99.95th=[ 1620], 00:26:00.518 | 99.99th=[ 1620] 00:26:00.518 bw ( KiB/s): min= 7680, max=45568, per=4.36%, avg=26517.20, stdev=10675.03, samples=20 00:26:00.518 iops : min= 30, max= 178, avg=103.55, stdev=41.64, samples=20 00:26:00.518 lat (msec) : 100=0.91%, 250=12.73%, 500=29.55%, 750=21.27%, 1000=29.55% 00:26:00.518 lat (msec) : 2000=6.00% 00:26:00.518 cpu : usr=0.03%, sys=0.39%, ctx=199, majf=0, minf=4097 00:26:00.518 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3% 00:26:00.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.518 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.518 issued rwts: total=1100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.518 job10: (groupid=0, jobs=1): err= 0: pid=301615: Tue Nov 19 03:07:09 2024 00:26:00.518 read: IOPS=113, BW=28.5MiB/s (29.9MB/s)(290MiB/10168msec) 00:26:00.518 slat (usec): min=12, max=443849, avg=7723.23, stdev=37633.50 00:26:00.519 clat (msec): min=26, max=1582, avg=553.24, stdev=392.70 00:26:00.519 lat (msec): min=26, max=1582, avg=560.96, stdev=398.36 00:26:00.519 clat percentiles (msec): 00:26:00.519 | 1.00th=[ 42], 5.00th=[ 95], 10.00th=[ 169], 20.00th=[ 241], 00:26:00.519 | 30.00th=[ 284], 40.00th=[ 326], 50.00th=[ 384], 60.00th=[ 558], 00:26:00.519 | 70.00th=[ 718], 80.00th=[ 894], 90.00th=[ 1217], 95.00th=[ 1385], 00:26:00.519 | 99.00th=[ 1569], 99.50th=[ 1586], 99.90th=[ 1586], 99.95th=[ 1586], 00:26:00.519 | 99.99th=[ 1586] 00:26:00.519 bw ( KiB/s): min= 7680, max=68096, 
per=4.61%, avg=28030.60, stdev=19255.27, samples=20 00:26:00.519 iops : min= 30, max= 266, avg=109.45, stdev=75.25, samples=20 00:26:00.519 lat (msec) : 50=1.47%, 100=4.23%, 250=17.77%, 500=34.51%, 750=15.88% 00:26:00.519 lat (msec) : 1000=8.97%, 2000=17.17% 00:26:00.519 cpu : usr=0.05%, sys=0.47%, ctx=155, majf=0, minf=4097 00:26:00.519 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6% 00:26:00.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.519 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.519 issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.519 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.519 00:26:00.519 Run status group 0 (all jobs): 00:26:00.519 READ: bw=594MiB/s (623MB/s), 27.1MiB/s-93.7MiB/s (28.4MB/s-98.2MB/s), io=6075MiB (6370MB), run=10045-10220msec 00:26:00.519 00:26:00.519 Disk stats (read/write): 00:26:00.519 nvme0n1: ios=3722/0, merge=0/0, ticks=1277675/0, in_queue=1277675, util=97.43% 00:26:00.519 nvme10n1: ios=2409/0, merge=0/0, ticks=1222042/0, in_queue=1222042, util=97.57% 00:26:00.519 nvme1n1: ios=6672/0, merge=0/0, ticks=1243241/0, in_queue=1243241, util=97.83% 00:26:00.519 nvme2n1: ios=5468/0, merge=0/0, ticks=1194729/0, in_queue=1194729, util=97.97% 00:26:00.519 nvme3n1: ios=3917/0, merge=0/0, ticks=1205203/0, in_queue=1205203, util=98.04% 00:26:00.519 nvme4n1: ios=4612/0, merge=0/0, ticks=1170667/0, in_queue=1170667, util=98.35% 00:26:00.519 nvme5n1: ios=7376/0, merge=0/0, ticks=1240418/0, in_queue=1240418, util=98.50% 00:26:00.519 nvme6n1: ios=2651/0, merge=0/0, ticks=1224173/0, in_queue=1224173, util=98.56% 00:26:00.519 nvme7n1: ios=6096/0, merge=0/0, ticks=1213121/0, in_queue=1213121, util=98.95% 00:26:00.519 nvme8n1: ios=2056/0, merge=0/0, ticks=1229051/0, in_queue=1229051, util=99.12% 00:26:00.519 nvme9n1: ios=2190/0, merge=0/0, ticks=1215147/0, in_queue=1215147, util=99.25% 00:26:00.519 03:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:00.519 [global] 00:26:00.519 thread=1 00:26:00.519 invalidate=1 00:26:00.519 rw=randwrite 00:26:00.519 time_based=1 00:26:00.519 runtime=10 00:26:00.519 ioengine=libaio 00:26:00.519 direct=1 00:26:00.519 bs=262144 00:26:00.519 iodepth=64 00:26:00.519 norandommap=1 00:26:00.519 numjobs=1 00:26:00.519 00:26:00.519 [job0] 00:26:00.519 filename=/dev/nvme0n1 00:26:00.519 [job1] 00:26:00.519 filename=/dev/nvme10n1 00:26:00.519 [job2] 00:26:00.519 filename=/dev/nvme1n1 00:26:00.519 [job3] 00:26:00.519 filename=/dev/nvme2n1 00:26:00.519 [job4] 00:26:00.519 filename=/dev/nvme3n1 00:26:00.519 [job5] 00:26:00.519 filename=/dev/nvme4n1 00:26:00.519 [job6] 00:26:00.519 filename=/dev/nvme5n1 00:26:00.519 [job7] 00:26:00.519 filename=/dev/nvme6n1 00:26:00.519 [job8] 00:26:00.519 filename=/dev/nvme7n1 00:26:00.519 [job9] 00:26:00.519 filename=/dev/nvme8n1 00:26:00.519 [job10] 00:26:00.519 filename=/dev/nvme9n1 00:26:00.519 Could not set queue depth (nvme0n1) 00:26:00.519 Could not set queue depth (nvme10n1) 00:26:00.519 Could not set queue depth (nvme1n1) 00:26:00.519 Could not set queue depth (nvme2n1) 00:26:00.519 Could not set queue depth (nvme3n1) 00:26:00.519 Could not set queue depth (nvme4n1) 00:26:00.519 Could not set queue depth (nvme5n1) 00:26:00.519 Could not set queue depth (nvme6n1) 00:26:00.519 Could not set queue depth (nvme7n1) 
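[Editorial note: the [global] and [jobN] fragments printed above by the fio-wrapper can be pieced together into the equivalent standalone job file sketched below. This is an illustrative reconstruction assembled from the logged parameters only; how scripts/fio-wrapper actually assembles or passes them is not shown in this log. Only job0 and job1 are written out here; job2 through job10 follow the same pattern with the remaining /dev/nvme*n1 devices listed above.]

  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1

  [job1]
  filename=/dev/nvme10n1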
00:26:00.519 Could not set queue depth (nvme8n1) 00:26:00.519 Could not set queue depth (nvme9n1) 00:26:00.519 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.519 fio-3.35 00:26:00.519 Starting 11 threads 00:26:10.488 00:26:10.488 job0: (groupid=0, jobs=1): err= 0: pid=302343: Tue Nov 19 03:07:20 2024 00:26:10.488 write: IOPS=248, BW=62.1MiB/s (65.1MB/s)(641MiB/10313msec); 0 zone resets 00:26:10.488 slat (usec): min=23, max=65297, avg=2876.40, stdev=7964.74 00:26:10.488 clat (usec): min=964, max=1083.7k, avg=254371.20, stdev=175539.00 00:26:10.488 lat (usec): min=1028, max=1083.7k, avg=257247.61, stdev=177706.35 00:26:10.488 clat percentiles (msec): 00:26:10.488 | 1.00th=[ 12], 5.00th=[ 37], 10.00th=[ 52], 20.00th=[ 105], 00:26:10.488 | 30.00th=[ 167], 40.00th=[ 194], 50.00th=[ 213], 60.00th=[ 247], 00:26:10.488 | 70.00th=[ 300], 80.00th=[ 405], 90.00th=[ 489], 95.00th=[ 550], 00:26:10.488 | 99.00th=[ 827], 99.50th=[ 911], 99.90th=[ 1036], 99.95th=[ 1083], 00:26:10.488 | 99.99th=[ 1083] 00:26:10.488 bw ( KiB/s): min=20480, max=133120, per=7.16%, avg=63975.30, stdev=32357.80, samples=20 00:26:10.488 iops : min= 80, max= 520, avg=249.85, stdev=126.36, samples=20 00:26:10.489 lat (usec) : 1000=0.12% 00:26:10.489 lat (msec) : 2=0.35%, 10=0.27%, 20=1.17%, 50=7.80%, 100=9.52% 00:26:10.489 lat (msec) : 250=41.05%, 500=30.63%, 750=6.83%, 1000=2.03%, 2000=0.23% 00:26:10.489 cpu : usr=0.72%, sys=0.77%, ctx=1338, majf=0, minf=1 00:26:10.489 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:26:10.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.489 issued rwts: total=0,2563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.489 job1: (groupid=0, jobs=1): err= 0: pid=302355: Tue Nov 19 03:07:20 2024 00:26:10.489 write: IOPS=180, BW=45.2MiB/s (47.4MB/s)(466MiB/10310msec); 0 zone resets 00:26:10.489 slat (usec): min=30, max=82584, avg=4250.89, stdev=10868.52 00:26:10.489 clat (msec): min=8, max=1079, avg=349.67, stdev=194.26 00:26:10.489 lat (msec): min=8, 
max=1079, avg=353.92, stdev=196.99 00:26:10.489 clat percentiles (msec): 00:26:10.489 | 1.00th=[ 9], 5.00th=[ 49], 10.00th=[ 104], 20.00th=[ 180], 00:26:10.489 | 30.00th=[ 226], 40.00th=[ 266], 50.00th=[ 330], 60.00th=[ 418], 00:26:10.489 | 70.00th=[ 472], 80.00th=[ 527], 90.00th=[ 575], 95.00th=[ 634], 00:26:10.489 | 99.00th=[ 835], 99.50th=[ 961], 99.90th=[ 1083], 99.95th=[ 1083], 00:26:10.489 | 99.99th=[ 1083] 00:26:10.489 bw ( KiB/s): min=20480, max=90624, per=5.16%, avg=46048.10, stdev=21207.70, samples=20 00:26:10.489 iops : min= 80, max= 354, avg=179.85, stdev=82.82, samples=20 00:26:10.489 lat (msec) : 10=3.06%, 20=0.64%, 50=1.40%, 100=3.92%, 250=27.00% 00:26:10.489 lat (msec) : 500=39.83%, 750=21.04%, 1000=2.79%, 2000=0.32% 00:26:10.489 cpu : usr=0.67%, sys=0.60%, ctx=910, majf=0, minf=1 00:26:10.489 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:10.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.489 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.489 issued rwts: total=0,1863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.489 job2: (groupid=0, jobs=1): err= 0: pid=302356: Tue Nov 19 03:07:20 2024 00:26:10.489 write: IOPS=485, BW=121MiB/s (127MB/s)(1251MiB/10314msec); 0 zone resets 00:26:10.489 slat (usec): min=16, max=79740, avg=1132.61, stdev=5043.90 00:26:10.489 clat (usec): min=685, max=1094.7k, avg=130688.37, stdev=144734.34 00:26:10.489 lat (usec): min=717, max=1094.7k, avg=131820.98, stdev=146453.13 00:26:10.489 clat percentiles (msec): 00:26:10.489 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 33], 00:26:10.489 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 82], 60.00th=[ 111], 00:26:10.489 | 70.00th=[ 136], 80.00th=[ 184], 90.00th=[ 321], 95.00th=[ 384], 00:26:10.489 | 99.00th=[ 793], 99.50th=[ 844], 99.90th=[ 1045], 99.95th=[ 1045], 00:26:10.489 | 99.99th=[ 1099] 00:26:10.489 bw ( KiB/s): min=20480, max=304640, per=14.16%, avg=126460.60, stdev=88248.78, samples=20 00:26:10.489 iops : min= 80, max= 1190, avg=493.95, stdev=344.75, samples=20 00:26:10.489 lat (usec) : 750=0.04%, 1000=0.20% 00:26:10.489 lat (msec) : 2=0.48%, 4=1.54%, 10=4.82%, 20=7.29%, 50=12.63% 00:26:10.489 lat (msec) : 100=28.82%, 250=28.42%, 500=12.47%, 750=2.14%, 1000=1.04% 00:26:10.489 lat (msec) : 2000=0.12% 00:26:10.489 cpu : usr=1.55%, sys=1.91%, ctx=3483, majf=0, minf=1 00:26:10.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:10.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.489 issued rwts: total=0,5004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.489 job3: (groupid=0, jobs=1): err= 0: pid=302357: Tue Nov 19 03:07:20 2024 00:26:10.489 write: IOPS=225, BW=56.3MiB/s (59.0MB/s)(567MiB/10078msec); 0 zone resets 00:26:10.489 slat (usec): min=14, max=44011, avg=3775.22, stdev=8762.49 00:26:10.489 clat (msec): min=11, max=818, avg=280.37, stdev=158.77 00:26:10.489 lat (msec): min=11, max=823, avg=284.15, stdev=161.19 00:26:10.489 clat percentiles (msec): 00:26:10.489 | 1.00th=[ 44], 5.00th=[ 93], 10.00th=[ 108], 20.00th=[ 148], 00:26:10.489 | 30.00th=[ 178], 40.00th=[ 203], 50.00th=[ 218], 60.00th=[ 275], 00:26:10.489 | 70.00th=[ 347], 80.00th=[ 456], 90.00th=[ 542], 95.00th=[ 575], 00:26:10.489 | 99.00th=[ 
667], 99.50th=[ 743], 99.90th=[ 802], 99.95th=[ 810], 00:26:10.489 | 99.99th=[ 818] 00:26:10.489 bw ( KiB/s): min=28672, max=128000, per=6.32%, avg=56464.60, stdev=27345.97, samples=20 00:26:10.489 iops : min= 112, max= 500, avg=220.55, stdev=106.80, samples=20 00:26:10.489 lat (msec) : 20=0.18%, 50=0.93%, 100=7.49%, 250=48.17%, 500=29.26% 00:26:10.489 lat (msec) : 750=13.49%, 1000=0.48% 00:26:10.489 cpu : usr=0.69%, sys=0.79%, ctx=883, majf=0, minf=1 00:26:10.489 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:10.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.489 issued rwts: total=0,2269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.489 job4: (groupid=0, jobs=1): err= 0: pid=302358: Tue Nov 19 03:07:20 2024 00:26:10.489 write: IOPS=356, BW=89.1MiB/s (93.5MB/s)(898MiB/10078msec); 0 zone resets 00:26:10.489 slat (usec): min=16, max=145809, avg=2248.72, stdev=6718.37 00:26:10.489 clat (usec): min=1058, max=678661, avg=177200.36, stdev=139276.93 00:26:10.489 lat (usec): min=1340, max=678733, avg=179449.09, stdev=141083.61 00:26:10.489 clat percentiles (msec): 00:26:10.489 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 16], 20.00th=[ 50], 00:26:10.489 | 30.00th=[ 79], 40.00th=[ 107], 50.00th=[ 155], 60.00th=[ 205], 00:26:10.489 | 70.00th=[ 251], 80.00th=[ 288], 90.00th=[ 334], 95.00th=[ 443], 00:26:10.489 | 99.00th=[ 609], 99.50th=[ 667], 99.90th=[ 676], 99.95th=[ 676], 00:26:10.489 | 99.99th=[ 676] 00:26:10.489 bw ( KiB/s): min=30720, max=239616, per=10.12%, avg=90361.45, stdev=57803.38, samples=20 00:26:10.489 iops : min= 120, max= 936, avg=352.95, stdev=225.81, samples=20 00:26:10.489 lat (msec) : 2=0.17%, 4=1.84%, 10=6.07%, 20=3.31%, 50=8.68% 00:26:10.489 lat (msec) : 100=18.31%, 250=31.31%, 500=26.89%, 750=3.42% 00:26:10.489 cpu : usr=0.98%, sys=1.24%, ctx=1901, majf=0, minf=1 00:26:10.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:10.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.489 issued rwts: total=0,3593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.489 job5: (groupid=0, jobs=1): err= 0: pid=302359: Tue Nov 19 03:07:20 2024 00:26:10.489 write: IOPS=290, BW=72.7MiB/s (76.2MB/s)(749MiB/10309msec); 0 zone resets 00:26:10.489 slat (usec): min=17, max=132136, avg=2428.95, stdev=7172.42 00:26:10.489 clat (usec): min=676, max=710403, avg=217454.17, stdev=137011.11 00:26:10.489 lat (usec): min=709, max=710443, avg=219883.12, stdev=138483.98 00:26:10.489 clat percentiles (msec): 00:26:10.489 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 54], 20.00th=[ 110], 00:26:10.489 | 30.00th=[ 144], 40.00th=[ 178], 50.00th=[ 205], 60.00th=[ 224], 00:26:10.489 | 70.00th=[ 253], 80.00th=[ 292], 90.00th=[ 384], 95.00th=[ 542], 00:26:10.489 | 99.00th=[ 609], 99.50th=[ 659], 99.90th=[ 701], 99.95th=[ 709], 00:26:10.489 | 99.99th=[ 709] 00:26:10.489 bw ( KiB/s): min=26624, max=141824, per=8.41%, avg=75086.20, stdev=32208.98, samples=20 00:26:10.489 iops : min= 104, max= 554, avg=293.25, stdev=125.74, samples=20 00:26:10.489 lat (usec) : 750=0.03%, 1000=0.13% 00:26:10.489 lat (msec) : 2=0.73%, 4=1.57%, 10=1.74%, 20=1.67%, 50=3.84% 00:26:10.489 lat (msec) : 100=5.61%, 250=53.25%, 
500=24.72%, 750=6.71% 00:26:10.489 cpu : usr=0.92%, sys=0.89%, ctx=1553, majf=0, minf=1 00:26:10.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:10.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.489 issued rwts: total=0,2997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.489 job6: (groupid=0, jobs=1): err= 0: pid=302360: Tue Nov 19 03:07:20 2024 00:26:10.489 write: IOPS=450, BW=113MiB/s (118MB/s)(1131MiB/10047msec); 0 zone resets 00:26:10.490 slat (usec): min=13, max=177807, avg=1565.89, stdev=6187.47 00:26:10.490 clat (usec): min=1144, max=580504, avg=140023.52, stdev=133436.24 00:26:10.490 lat (usec): min=1408, max=589338, avg=141589.40, stdev=135034.37 00:26:10.490 clat percentiles (msec): 00:26:10.490 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 22], 20.00th=[ 39], 00:26:10.490 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 106], 00:26:10.490 | 70.00th=[ 194], 80.00th=[ 271], 90.00th=[ 326], 95.00th=[ 426], 00:26:10.490 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 575], 99.95th=[ 575], 00:26:10.490 | 99.99th=[ 584] 00:26:10.490 bw ( KiB/s): min=32768, max=284160, per=12.79%, avg=114221.40, stdev=84153.79, samples=20 00:26:10.490 iops : min= 128, max= 1110, avg=446.15, stdev=328.74, samples=20 00:26:10.490 lat (msec) : 2=0.09%, 4=0.62%, 10=4.09%, 20=4.13%, 50=17.06% 00:26:10.490 lat (msec) : 100=33.13%, 250=18.67%, 500=19.36%, 750=2.85% 00:26:10.490 cpu : usr=1.25%, sys=1.47%, ctx=2604, majf=0, minf=1 00:26:10.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:10.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.490 issued rwts: total=0,4525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.490 job7: (groupid=0, jobs=1): err= 0: pid=302361: Tue Nov 19 03:07:20 2024 00:26:10.490 write: IOPS=356, BW=89.2MiB/s (93.5MB/s)(920MiB/10314msec); 0 zone resets 00:26:10.490 slat (usec): min=18, max=64283, avg=2436.45, stdev=6892.59 00:26:10.490 clat (msec): min=6, max=1090, avg=176.81, stdev=169.32 00:26:10.490 lat (msec): min=6, max=1090, avg=179.25, stdev=171.55 00:26:10.490 clat percentiles (msec): 00:26:10.490 | 1.00th=[ 18], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 61], 00:26:10.490 | 30.00th=[ 78], 40.00th=[ 94], 50.00th=[ 107], 60.00th=[ 122], 00:26:10.490 | 70.00th=[ 150], 80.00th=[ 296], 90.00th=[ 456], 95.00th=[ 535], 00:26:10.490 | 99.00th=[ 810], 99.50th=[ 835], 99.90th=[ 1036], 99.95th=[ 1083], 00:26:10.490 | 99.99th=[ 1083] 00:26:10.490 bw ( KiB/s): min=20480, max=264192, per=10.36%, avg=92575.35, stdev=72281.75, samples=20 00:26:10.490 iops : min= 80, max= 1032, avg=361.60, stdev=282.34, samples=20 00:26:10.490 lat (msec) : 10=0.08%, 20=1.03%, 50=2.64%, 100=41.47%, 250=31.98% 00:26:10.490 lat (msec) : 500=16.33%, 750=4.89%, 1000=1.41%, 2000=0.16% 00:26:10.490 cpu : usr=1.13%, sys=1.19%, ctx=1237, majf=0, minf=2 00:26:10.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:10.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.490 issued rwts: total=0,3680,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:10.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.490 job8: (groupid=0, jobs=1): err= 0: pid=302362: Tue Nov 19 03:07:20 2024 00:26:10.490 write: IOPS=269, BW=67.4MiB/s (70.7MB/s)(695MiB/10302msec); 0 zone resets 00:26:10.490 slat (usec): min=19, max=180105, avg=2501.81, stdev=9440.13 00:26:10.490 clat (usec): min=1191, max=1272.8k, avg=234592.88, stdev=221982.67 00:26:10.490 lat (usec): min=1787, max=1272.8k, avg=237094.69, stdev=224918.48 00:26:10.490 clat percentiles (msec): 00:26:10.490 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 23], 20.00th=[ 40], 00:26:10.490 | 30.00th=[ 64], 40.00th=[ 95], 50.00th=[ 130], 60.00th=[ 205], 00:26:10.490 | 70.00th=[ 384], 80.00th=[ 477], 90.00th=[ 567], 95.00th=[ 625], 00:26:10.490 | 99.00th=[ 793], 99.50th=[ 860], 99.90th=[ 1267], 99.95th=[ 1267], 00:26:10.490 | 99.99th=[ 1267] 00:26:10.490 bw ( KiB/s): min=24576, max=204288, per=7.78%, avg=69520.30, stdev=51553.64, samples=20 00:26:10.490 iops : min= 96, max= 798, avg=271.55, stdev=201.37, samples=20 00:26:10.490 lat (msec) : 2=0.07%, 4=0.65%, 10=1.69%, 20=5.83%, 50=18.64% 00:26:10.490 lat (msec) : 100=14.68%, 250=20.98%, 500=21.05%, 750=15.11%, 1000=0.90% 00:26:10.490 lat (msec) : 2000=0.40% 00:26:10.490 cpu : usr=0.89%, sys=0.98%, ctx=1925, majf=0, minf=1 00:26:10.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:10.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.490 issued rwts: total=0,2779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.490 job9: (groupid=0, jobs=1): err= 0: pid=302363: Tue Nov 19 03:07:20 2024 00:26:10.490 write: IOPS=410, BW=103MiB/s (108MB/s)(1058MiB/10309msec); 0 zone resets 00:26:10.490 slat (usec): min=17, max=63915, avg=1496.66, stdev=5799.63 00:26:10.490 clat (usec): min=720, max=1110.2k, avg=154288.59, stdev=156334.71 00:26:10.490 lat (usec): min=758, max=1110.3k, avg=155785.25, stdev=158082.33 00:26:10.490 clat percentiles (msec): 00:26:10.490 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 18], 20.00th=[ 43], 00:26:10.490 | 30.00th=[ 65], 40.00th=[ 74], 50.00th=[ 106], 60.00th=[ 133], 00:26:10.490 | 70.00th=[ 171], 80.00th=[ 243], 90.00th=[ 368], 95.00th=[ 489], 00:26:10.490 | 99.00th=[ 776], 99.50th=[ 827], 99.90th=[ 1070], 99.95th=[ 1099], 00:26:10.490 | 99.99th=[ 1116] 00:26:10.490 bw ( KiB/s): min=20992, max=297472, per=11.95%, avg=106725.90, stdev=77667.03, samples=20 00:26:10.490 iops : min= 82, max= 1162, avg=416.85, stdev=303.41, samples=20 00:26:10.490 lat (usec) : 750=0.02%, 1000=0.14% 00:26:10.490 lat (msec) : 2=0.71%, 4=1.51%, 10=5.13%, 20=3.02%, 50=11.51% 00:26:10.490 lat (msec) : 100=26.44%, 250=32.18%, 500=14.63%, 750=3.59%, 1000=0.87% 00:26:10.490 lat (msec) : 2000=0.24% 00:26:10.490 cpu : usr=1.22%, sys=1.41%, ctx=2835, majf=0, minf=1 00:26:10.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:10.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.490 issued rwts: total=0,4232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.490 job10: (groupid=0, jobs=1): err= 0: pid=302364: Tue Nov 19 03:07:20 2024 00:26:10.490 write: IOPS=241, BW=60.5MiB/s (63.4MB/s)(620MiB/10259msec); 0 zone resets 00:26:10.490 
slat (usec): min=16, max=117863, avg=2244.33, stdev=8006.84 00:26:10.490 clat (msec): min=2, max=817, avg=262.16, stdev=189.48 00:26:10.490 lat (msec): min=2, max=821, avg=264.41, stdev=191.54 00:26:10.490 clat percentiles (msec): 00:26:10.490 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 41], 20.00th=[ 78], 00:26:10.490 | 30.00th=[ 113], 40.00th=[ 178], 50.00th=[ 232], 60.00th=[ 288], 00:26:10.490 | 70.00th=[ 355], 80.00th=[ 460], 90.00th=[ 550], 95.00th=[ 584], 00:26:10.490 | 99.00th=[ 735], 99.50th=[ 776], 99.90th=[ 810], 99.95th=[ 818], 00:26:10.490 | 99.99th=[ 818] 00:26:10.490 bw ( KiB/s): min=27648, max=139264, per=6.93%, avg=61899.80, stdev=31870.17, samples=20 00:26:10.490 iops : min= 108, max= 544, avg=241.75, stdev=124.50, samples=20 00:26:10.490 lat (msec) : 4=0.08%, 10=0.89%, 20=2.46%, 50=10.24%, 100=14.31% 00:26:10.490 lat (msec) : 250=26.84%, 500=29.10%, 750=15.32%, 1000=0.77% 00:26:10.490 cpu : usr=0.64%, sys=0.91%, ctx=1650, majf=0, minf=1 00:26:10.490 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:10.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.490 issued rwts: total=0,2481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.490 00:26:10.490 Run status group 0 (all jobs): 00:26:10.490 WRITE: bw=872MiB/s (915MB/s), 45.2MiB/s-121MiB/s (47.4MB/s-127MB/s), io=8997MiB (9434MB), run=10047-10314msec 00:26:10.490 00:26:10.490 Disk stats (read/write): 00:26:10.490 nvme0n1: ios=48/5054, merge=0/0, ticks=784/1227477, in_queue=1228261, util=100.00% 00:26:10.490 nvme10n1: ios=46/3652, merge=0/0, ticks=1373/1225837, in_queue=1227210, util=100.00% 00:26:10.490 nvme1n1: ios=0/9935, merge=0/0, ticks=0/1237875, in_queue=1237875, util=97.74% 00:26:10.490 nvme2n1: ios=0/4326, merge=0/0, ticks=0/1216851, in_queue=1216851, util=97.83% 00:26:10.490 nvme3n1: ios=20/6972, merge=0/0, ticks=225/1218921, in_queue=1219146, util=97.98% 00:26:10.490 nvme4n1: ios=36/5927, merge=0/0, ticks=1130/1219327, in_queue=1220457, util=100.00% 00:26:10.490 nvme5n1: ios=43/8782, merge=0/0, ticks=2134/1210197, in_queue=1212331, util=100.00% 00:26:10.490 nvme6n1: ios=0/7288, merge=0/0, ticks=0/1221414, in_queue=1221414, util=98.52% 00:26:10.491 nvme7n1: ios=41/5492, merge=0/0, ticks=767/1231544, in_queue=1232311, util=100.00% 00:26:10.491 nvme8n1: ios=0/8397, merge=0/0, ticks=0/1233098, in_queue=1233098, util=99.05% 00:26:10.491 nvme9n1: ios=42/4925, merge=0/0, ticks=1865/1245801, in_queue=1247666, util=100.00% 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:10.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.491 03:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:10.491 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.491 03:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:10.749 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.749 03:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.749 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:11.007 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:11.007 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.007 03:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.007 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:11.265 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.265 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:11.523 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.523 03:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.523 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:11.523 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.523 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:11.781 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.781 03:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:11.781 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.781 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:12.039 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:12.039 
03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:12.039 rmmod nvme_tcp 00:26:12.039 rmmod nvme_fabrics 00:26:12.039 rmmod nvme_keyring 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 297340 ']' 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 297340 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 297340 ']' 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 297340 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297340 00:26:12.039 03:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297340' 00:26:12.039 killing process with pid 297340 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 297340 00:26:12.039 03:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 297340 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.607 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:14.513 00:26:14.513 real 1m0.686s 00:26:14.513 user 3m33.467s 00:26:14.513 sys 0m15.492s 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.513 ************************************ 00:26:14.513 END TEST nvmf_multiconnection 00:26:14.513 ************************************ 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:14.513 ************************************ 00:26:14.513 START TEST nvmf_initiator_timeout 00:26:14.513 ************************************ 00:26:14.513 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:14.772 * Looking for test storage... 00:26:14.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:14.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.773 --rc genhtml_branch_coverage=1 00:26:14.773 --rc genhtml_function_coverage=1 00:26:14.773 --rc genhtml_legend=1 00:26:14.773 --rc geninfo_all_blocks=1 00:26:14.773 --rc geninfo_unexecuted_blocks=1 00:26:14.773 00:26:14.773 ' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:14.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.773 --rc genhtml_branch_coverage=1 00:26:14.773 --rc genhtml_function_coverage=1 00:26:14.773 --rc genhtml_legend=1 00:26:14.773 --rc geninfo_all_blocks=1 00:26:14.773 --rc geninfo_unexecuted_blocks=1 00:26:14.773 00:26:14.773 ' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:14.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.773 --rc genhtml_branch_coverage=1 00:26:14.773 --rc genhtml_function_coverage=1 00:26:14.773 --rc genhtml_legend=1 00:26:14.773 --rc geninfo_all_blocks=1 00:26:14.773 --rc geninfo_unexecuted_blocks=1 00:26:14.773 00:26:14.773 ' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:14.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.773 --rc genhtml_branch_coverage=1 00:26:14.773 --rc genhtml_function_coverage=1 00:26:14.773 --rc genhtml_legend=1 00:26:14.773 --rc geninfo_all_blocks=1 00:26:14.773 --rc geninfo_unexecuted_blocks=1 00:26:14.773 00:26:14.773 ' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.773 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.774 03:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.774 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.307 03:07:27 
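The "[: : integer expression expected" complaint above comes from an arithmetic test on a variable that is empty in this environment (the '[' '' -eq 1 ']' at common.sh line 33 just before it). It is harmless, the test simply evaluates false, but the usual defensive spelling gives the variable an integer default first. SOME_FLAG below is a hypothetical stand-in for whichever variable common.sh actually checks there:

    SOME_FLAG=""                   # hypothetical; empty in this environment
    [ "$SOME_FLAG" -eq 1 ]         # -> "[: : integer expression expected", test evaluates false
    [ "${SOME_FLAG:-0}" -eq 1 ]    # defensive form: default to 0 so the comparison always sees an integer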
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.307 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:17.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.308 03:07:27 
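The NIC scan above keys purely off PCI vendor/device IDs; condensed, the ID lists visible in the trace look like this (in the real nvmf/common.sh the arrays end up holding the PCI addresses found for each ID through a bus cache, not the raw IDs):

    intel=0x8086 mellanox=0x15b3
    e810=(0x1592 0x159b)        # Intel E810 family (ice driver): 0x159b is what 0000:0a:00.0/.1 report here
    x722=(0x37d2)               # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)   # Mellanox IDs
    pci_devs=("${e810[@]}")     # on this e810 box the candidate list collapses to the E810 entries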
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:17.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:17.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.308 03:07:27 
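Each matching PCI function is then mapped to its kernel interface through sysfs, which is all the "Found net devices under ..." lines amount to:

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)       # glob resolves to .../net/cvl_0_0 on this box
    pci_net_devs=("${pci_net_devs[@]##*/}")                # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"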
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:17.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.308 03:07:27 
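The namespace setup above gives the test a target and an initiator on a single box: the first E810 port moves into its own namespace with the target address, the second stays in the root namespace as the initiator. Condensed, with the names and addresses used in this run (the link-up and ping checks follow in the next trace lines):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target will listen on 10.0.0.2:4420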
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:26:17.308 00:26:17.308 --- 10.0.0.2 ping statistics --- 00:26:17.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.308 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:26:17.308 00:26:17.308 --- 10.0.0.1 ping statistics --- 00:26:17.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.308 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=305541 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 305541 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 305541 ']' 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.308 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 [2024-11-19 03:07:27.589842] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:26:17.309 [2024-11-19 03:07:27.589929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.309 [2024-11-19 03:07:27.660738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.309 [2024-11-19 03:07:27.703737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.309 [2024-11-19 03:07:27.703795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.309 [2024-11-19 03:07:27.703829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.309 [2024-11-19 03:07:27.703840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.309 [2024-11-19 03:07:27.703850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:17.309 [2024-11-19 03:07:27.705386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.309 [2024-11-19 03:07:27.705494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.309 [2024-11-19 03:07:27.705592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.309 [2024-11-19 03:07:27.705599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 Malloc0 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 Delay0 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 [2024-11-19 03:07:27.894196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.309 03:07:27 
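Reproducing this target bring-up by hand follows the trace one-to-one: start nvmf_tgt inside the namespace, then issue a handful of RPCs. A sketch using scripts/rpc.py directly instead of the test's rpc_cmd wrapper; the add_ns and add_listener calls appear a few lines further down in the trace:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # ...wait for the RPC socket (/var/tmp/spdk.sock), then:
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # avg/p99 read+write delays, in us
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420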
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.309 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.309 [2024-11-19 03:07:27.922465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.567 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.567 03:07:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:18.132 03:07:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:18.133 03:07:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:18.133 03:07:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.133 03:07:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.133 03:07:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=305958 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:26:20.029 03:07:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:20.029 [global] 00:26:20.029 thread=1 00:26:20.029 invalidate=1 00:26:20.029 rw=write 00:26:20.029 time_based=1 00:26:20.029 runtime=60 00:26:20.029 ioengine=libaio 00:26:20.029 direct=1 00:26:20.029 bs=4096 00:26:20.029 iodepth=1 00:26:20.029 norandommap=0 00:26:20.029 numjobs=1 00:26:20.029 00:26:20.029 verify_dump=1 00:26:20.029 verify_backlog=512 00:26:20.029 verify_state_save=0 00:26:20.029 do_verify=1 00:26:20.029 verify=crc32c-intel 00:26:20.286 [job0] 00:26:20.286 filename=/dev/nvme0n1 00:26:20.287 Could not set queue depth (nvme0n1) 00:26:20.287 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.287 fio-3.35 00:26:20.287 Starting 1 thread 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.565 true 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.565 true 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.565 true 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.565 true 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.565 03:07:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
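This is the heart of the initiator-timeout check: while the fio job above pushes 4 KiB, queue-depth-1 writes at /dev/nvme0n1 for 60 seconds, the Delay0 latencies are raised from 30 us to tens of seconds so in-flight commands sit in the target far longer than usual, then (as the following trace lines show) dropped back to 30 us; fio is expected to ride out the stall and finish without I/O errors. By hand, the bump and restore would look like this, with the same values as traced:

    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000    # 31,000,000 us, roughly 31 s
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # value as traced
    sleep 3
    for lat in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30         # back to 30 us
    done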
common/autotest_common.sh@10 -- # set +x 00:26:26.094 true 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.094 true 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.094 true 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.094 true 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:26.094 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 305958 00:27:22.321 00:27:22.321 job0: (groupid=0, jobs=1): err= 0: pid=306038: Tue Nov 19 03:08:31 2024 00:27:22.322 read: IOPS=8, BW=35.3KiB/s (36.2kB/s)(2120KiB/60029msec) 00:27:22.322 slat (usec): min=4, max=10847, avg=33.44, stdev=470.65 00:27:22.322 clat (usec): min=218, max=41120k, avg=112769.24, stdev=1784672.58 00:27:22.322 lat (usec): min=224, max=41120k, avg=112802.68, stdev=1784671.94 00:27:22.322 clat percentiles (usec): 00:27:22.322 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 260], 00:27:22.322 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:27:22.322 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:27:22.322 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:27:22.322 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:27:22.322 | 99.95th=[17112761], 99.99th=[17112761] 00:27:22.322 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60029msec); 0 zone resets 00:27:22.322 slat (usec): min=6, max=28270, avg=35.66, stdev=883.20 00:27:22.322 clat (usec): min=163, max=536, avg=197.59, stdev=23.63 00:27:22.322 lat (usec): min=169, max=28524, avg=233.25, stdev=885.30 00:27:22.322 clat percentiles (usec): 00:27:22.322 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:27:22.322 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:27:22.322 | 70.00th=[ 202], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 243], 00:27:22.322 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 416], 99.95th=[ 537], 00:27:22.322 | 
99.99th=[ 537] 00:27:22.322 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:27:22.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:27:22.322 lat (usec) : 250=67.25%, 500=3.60%, 750=0.06% 00:27:22.322 lat (msec) : 50=29.02%, >=2000=0.06% 00:27:22.322 cpu : usr=0.01%, sys=0.03%, ctx=1557, majf=0, minf=1 00:27:22.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.322 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:22.322 00:27:22.322 Run status group 0 (all jobs): 00:27:22.322 READ: bw=35.3KiB/s (36.2kB/s), 35.3KiB/s-35.3KiB/s (36.2kB/s-36.2kB/s), io=2120KiB (2171kB), run=60029-60029msec 00:27:22.322 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60029-60029msec 00:27:22.322 00:27:22.322 Disk stats (read/write): 00:27:22.322 nvme0n1: ios=578/1024, merge=0/0, ticks=19834/197, in_queue=20031, util=99.74% 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:22.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:22.322 nvmf hotplug test: fio successful as expected 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - 
SIGINT SIGTERM EXIT 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.322 rmmod nvme_tcp 00:27:22.322 rmmod nvme_fabrics 00:27:22.322 rmmod nvme_keyring 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 305541 ']' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 305541 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 305541 ']' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 305541 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305541 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305541' 00:27:22.322 killing process with pid 305541 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 305541 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 305541 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:22.322 03:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.322 03:08:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:22.893 00:27:22.893 real 1m8.332s 00:27:22.893 user 4m11.248s 00:27:22.893 sys 0m6.335s 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:22.893 ************************************ 00:27:22.893 END TEST nvmf_initiator_timeout 00:27:22.893 ************************************ 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:22.893 03:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.428 03:08:35 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:25.428 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.428 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:25.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:25.429 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:25.429 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:25.429 ************************************ 00:27:25.429 START TEST nvmf_perf_adq 00:27:25.429 ************************************ 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:25.429 * Looking for test storage... 
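The gather_supported_nvmf_pci_devs pass above matches NICs by vendor:device ID (e.g. 0x8086:0x159b) and then resolves each PCI function to its kernel net device by listing /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of that sysfs lookup, using the PCI addresses reported in the log; this is an illustration, not the common.sh implementation itself:

# Resolve PCI functions to the net devices bound to them (same sysfs layout as above).
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue                 # no net driver bound to this function
        echo "Found net device under $pci: ${netdir##*/}"
    done
done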
00:27:25.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.429 --rc genhtml_branch_coverage=1 00:27:25.429 --rc genhtml_function_coverage=1 00:27:25.429 --rc genhtml_legend=1 00:27:25.429 --rc geninfo_all_blocks=1 00:27:25.429 --rc geninfo_unexecuted_blocks=1 00:27:25.429 00:27:25.429 ' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.429 --rc genhtml_branch_coverage=1 00:27:25.429 --rc genhtml_function_coverage=1 00:27:25.429 --rc genhtml_legend=1 00:27:25.429 --rc geninfo_all_blocks=1 00:27:25.429 --rc geninfo_unexecuted_blocks=1 00:27:25.429 00:27:25.429 ' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.429 --rc genhtml_branch_coverage=1 00:27:25.429 --rc genhtml_function_coverage=1 00:27:25.429 --rc genhtml_legend=1 00:27:25.429 --rc geninfo_all_blocks=1 00:27:25.429 --rc geninfo_unexecuted_blocks=1 00:27:25.429 00:27:25.429 ' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.429 --rc genhtml_branch_coverage=1 00:27:25.429 --rc genhtml_function_coverage=1 00:27:25.429 --rc genhtml_legend=1 00:27:25.429 --rc geninfo_all_blocks=1 00:27:25.429 --rc geninfo_unexecuted_blocks=1 00:27:25.429 00:27:25.429 ' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
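The cmp_versions/lt records above gate the lcov option set on whether the detected lcov version is older than 2: both version strings are split on '.', '-' and ':' and compared component by component. A simplified sketch of that comparison, assuming purely numeric components (the real scripts/common.sh handles more cases):

# Succeed if dotted version $1 is strictly older than $2 (numeric components only).
version_lt() {
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2: use legacy coverage flags"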
00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.429 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:25.430 03:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.430 03:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.331 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:27.332 03:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:27.332 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:27.332 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:27.332 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:27.332 03:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:27.332 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:27.332 03:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:28.265 03:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:32.450 03:08:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.719 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:37.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:37.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:37.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:37.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:37.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:27:37.720 00:27:37.720 --- 10.0.0.2 ping statistics --- 00:27:37.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.720 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:27:37.720 00:27:37.720 --- 10.0.0.1 ping statistics --- 00:27:37.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.720 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=317839 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 317839 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 317839 ']' 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.720 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 [2024-11-19 03:08:47.471387] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
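The nvmf_tcp_init steps above build the test topology by moving one NIC port into a private network namespace for the SPDK target (cvl_0_0, 10.0.0.2) while the initiator port stays in the root namespace (cvl_0_1, 10.0.0.1); an iptables rule tagged SPDK_NVMF opens TCP port 4420 so the later cleanup can strip it via iptables-save | grep -v SPDK_NVMF | iptables-restore, and the two pings confirm reachability in both directions. A condensed sketch of those steps, using the interface names and addresses from the log rather than the common.sh helpers (the comment text is abbreviated here; only the SPDK_NVMF tag matters for cleanup):

ip netns add cvl_0_0_ns_spdk                              # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow NVMe/TCP listener'
ping -c 1 10.0.0.2                                        # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator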
00:27:37.721 [2024-11-19 03:08:47.471469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.721 [2024-11-19 03:08:47.541840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.721 [2024-11-19 03:08:47.587174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.721 [2024-11-19 03:08:47.587243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.721 [2024-11-19 03:08:47.587265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.721 [2024-11-19 03:08:47.587284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.721 [2024-11-19 03:08:47.587299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.721 [2024-11-19 03:08:47.588909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.721 [2024-11-19 03:08:47.588983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.721 [2024-11-19 03:08:47.589023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.721 [2024-11-19 03:08:47.589026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 
03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 [2024-11-19 03:08:47.864503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 Malloc1 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.721 [2024-11-19 03:08:47.926594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=317871 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:37.721 03:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:39.635 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:39.635 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.635 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.635 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.635 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:39.635 "tick_rate": 2700000000, 00:27:39.635 "poll_groups": [ 00:27:39.635 { 00:27:39.635 "name": "nvmf_tgt_poll_group_000", 00:27:39.635 "admin_qpairs": 1, 00:27:39.635 "io_qpairs": 1, 00:27:39.635 "current_admin_qpairs": 1, 00:27:39.635 "current_io_qpairs": 1, 00:27:39.635 "pending_bdev_io": 0, 00:27:39.635 "completed_nvme_io": 19485, 00:27:39.635 "transports": [ 00:27:39.635 { 00:27:39.635 "trtype": "TCP" 00:27:39.635 } 00:27:39.635 ] 00:27:39.635 }, 00:27:39.635 { 00:27:39.635 "name": "nvmf_tgt_poll_group_001", 00:27:39.636 "admin_qpairs": 0, 00:27:39.636 "io_qpairs": 1, 00:27:39.636 "current_admin_qpairs": 0, 00:27:39.636 "current_io_qpairs": 1, 00:27:39.636 "pending_bdev_io": 0, 00:27:39.636 "completed_nvme_io": 19780, 00:27:39.636 "transports": [ 00:27:39.636 { 00:27:39.636 "trtype": "TCP" 00:27:39.636 } 00:27:39.636 ] 00:27:39.636 }, 00:27:39.636 { 00:27:39.636 "name": "nvmf_tgt_poll_group_002", 00:27:39.636 "admin_qpairs": 0, 00:27:39.636 "io_qpairs": 1, 00:27:39.636 "current_admin_qpairs": 0, 00:27:39.636 "current_io_qpairs": 1, 00:27:39.636 "pending_bdev_io": 0, 00:27:39.636 "completed_nvme_io": 20093, 00:27:39.636 "transports": [ 00:27:39.636 { 00:27:39.636 "trtype": "TCP" 00:27:39.636 } 00:27:39.636 ] 00:27:39.636 }, 00:27:39.636 { 00:27:39.636 "name": "nvmf_tgt_poll_group_003", 00:27:39.636 "admin_qpairs": 0, 00:27:39.636 "io_qpairs": 1, 00:27:39.636 "current_admin_qpairs": 0, 00:27:39.636 "current_io_qpairs": 1, 00:27:39.636 "pending_bdev_io": 0, 00:27:39.636 "completed_nvme_io": 19561, 00:27:39.636 "transports": [ 00:27:39.636 { 00:27:39.636 "trtype": "TCP" 00:27:39.636 } 00:27:39.636 ] 00:27:39.636 } 00:27:39.636 ] 00:27:39.636 }' 00:27:39.636 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:39.636 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:39.636 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:39.636 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:39.636 03:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 317871 00:27:47.747 Initializing NVMe Controllers 00:27:47.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:47.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:47.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:47.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:47.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:27:47.747 Initialization complete. Launching workers. 00:27:47.747 ======================================================== 00:27:47.747 Latency(us) 00:27:47.747 Device Information : IOPS MiB/s Average min max 00:27:47.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10282.00 40.16 6224.98 2604.56 10011.53 00:27:47.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10465.50 40.88 6117.03 2280.59 10289.21 00:27:47.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10502.20 41.02 6095.04 2175.06 10125.07 00:27:47.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10267.80 40.11 6233.93 2340.90 10394.96 00:27:47.747 ======================================================== 00:27:47.747 Total : 41517.50 162.18 6167.11 2175.06 10394.96 00:27:47.747 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.747 rmmod nvme_tcp 00:27:47.747 rmmod nvme_fabrics 00:27:47.747 rmmod nvme_keyring 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 317839 ']' 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 317839 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 317839 ']' 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 317839 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317839 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317839' 00:27:47.747 killing process with pid 317839 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 317839 00:27:47.747 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 317839 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.004 03:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.901 03:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:49.901 03:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:49.901 03:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:49.901 03:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:50.835 03:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:53.367 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.636 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.637 03:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.637 03:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:58.637 00:27:58.637 --- 10.0.0.2 ping statistics --- 00:27:58.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.637 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:27:58.637 00:27:58.637 --- 10.0.0.1 ping statistics --- 00:27:58.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.637 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:58.637 net.core.busy_poll = 1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:27:58.637 net.core.busy_read = 1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.637 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=321127 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 321127 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 321127 ']' 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.638 03:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.638 [2024-11-19 03:09:08.843062] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:27:58.638 [2024-11-19 03:09:08.843147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.638 [2024-11-19 03:09:08.922920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.638 [2024-11-19 03:09:08.973750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:58.638 [2024-11-19 03:09:08.973803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.638 [2024-11-19 03:09:08.973827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.638 [2024-11-19 03:09:08.973838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.638 [2024-11-19 03:09:08.973849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.638 [2024-11-19 03:09:08.975352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.638 [2024-11-19 03:09:08.975371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.638 [2024-11-19 03:09:08.975426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.638 [2024-11-19 03:09:08.975429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.638 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.896 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.896 03:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:58.896 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.896 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.896 [2024-11-19 03:09:09.257344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.896 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 Malloc1 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 [2024-11-19 03:09:09.326629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=321275 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:58.897 03:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.799 03:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:00.799 "tick_rate": 2700000000, 00:28:00.799 "poll_groups": [ 00:28:00.799 { 00:28:00.799 "name": "nvmf_tgt_poll_group_000", 00:28:00.799 "admin_qpairs": 1, 00:28:00.799 "io_qpairs": 0, 00:28:00.799 "current_admin_qpairs": 1, 00:28:00.799 "current_io_qpairs": 0, 00:28:00.799 "pending_bdev_io": 0, 00:28:00.799 "completed_nvme_io": 0, 00:28:00.799 "transports": [ 00:28:00.799 { 00:28:00.799 "trtype": "TCP" 00:28:00.799 } 00:28:00.799 ] 00:28:00.799 }, 00:28:00.799 { 00:28:00.799 "name": "nvmf_tgt_poll_group_001", 00:28:00.799 "admin_qpairs": 0, 00:28:00.799 "io_qpairs": 4, 00:28:00.799 "current_admin_qpairs": 0, 00:28:00.799 "current_io_qpairs": 4, 00:28:00.799 "pending_bdev_io": 0, 00:28:00.799 "completed_nvme_io": 33247, 00:28:00.799 "transports": [ 00:28:00.799 { 00:28:00.799 "trtype": "TCP" 00:28:00.799 } 00:28:00.799 ] 00:28:00.799 }, 00:28:00.799 { 00:28:00.799 "name": "nvmf_tgt_poll_group_002", 00:28:00.799 "admin_qpairs": 0, 00:28:00.799 "io_qpairs": 0, 00:28:00.799 "current_admin_qpairs": 0, 00:28:00.799 "current_io_qpairs": 0, 00:28:00.799 "pending_bdev_io": 0, 00:28:00.799 "completed_nvme_io": 0, 00:28:00.799 "transports": [ 00:28:00.799 { 00:28:00.799 "trtype": "TCP" 00:28:00.799 } 00:28:00.799 ] 00:28:00.799 }, 00:28:00.799 { 00:28:00.799 "name": "nvmf_tgt_poll_group_003", 00:28:00.799 "admin_qpairs": 0, 00:28:00.799 "io_qpairs": 0, 00:28:00.799 "current_admin_qpairs": 0, 00:28:00.799 "current_io_qpairs": 0, 00:28:00.799 "pending_bdev_io": 0, 00:28:00.799 "completed_nvme_io": 0, 00:28:00.799 "transports": [ 00:28:00.799 { 00:28:00.799 "trtype": "TCP" 00:28:00.799 } 00:28:00.799 ] 00:28:00.799 } 00:28:00.799 ] 00:28:00.799 }' 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:00.799 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 321275 00:28:08.911 Initializing NVMe Controllers 00:28:08.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:08.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:08.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:08.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:08.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:08.911 Initialization complete. Launching workers. 
00:28:08.911 ======================================================== 00:28:08.911 Latency(us) 00:28:08.911 Device Information : IOPS MiB/s Average min max 00:28:08.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4169.10 16.29 15356.12 1854.36 60247.49 00:28:08.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4976.50 19.44 12864.73 1631.88 63334.68 00:28:08.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4121.70 16.10 15534.48 1851.98 61213.73 00:28:08.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4272.10 16.69 14985.50 1881.77 60428.85 00:28:08.911 ======================================================== 00:28:08.911 Total : 17539.40 68.51 14600.87 1631.88 63334.68 00:28:08.911 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.911 rmmod nvme_tcp 00:28:08.911 rmmod nvme_fabrics 00:28:08.911 rmmod nvme_keyring 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:08.911 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 321127 ']' 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 321127 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 321127 ']' 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 321127 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321127 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321127' 00:28:09.170 killing process with pid 321127 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 321127 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 321127 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.170 03:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.170 03:09:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.455 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:12.455 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:12.455 00:28:12.455 real 0m47.153s 00:28:12.455 user 2m39.647s 00:28:12.455 sys 0m11.304s 00:28:12.455 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.455 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.455 ************************************ 00:28:12.455 END TEST nvmf_perf_adq 00:28:12.455 ************************************ 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:12.456 ************************************ 00:28:12.456 START TEST nvmf_shutdown 00:28:12.456 ************************************ 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:12.456 * Looking for test storage... 
00:28:12.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:12.456 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.456 --rc genhtml_branch_coverage=1 00:28:12.456 --rc genhtml_function_coverage=1 00:28:12.456 --rc genhtml_legend=1 00:28:12.456 --rc geninfo_all_blocks=1 00:28:12.456 --rc geninfo_unexecuted_blocks=1 00:28:12.456 00:28:12.456 ' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.456 --rc genhtml_branch_coverage=1 00:28:12.456 --rc genhtml_function_coverage=1 00:28:12.456 --rc genhtml_legend=1 00:28:12.456 --rc geninfo_all_blocks=1 00:28:12.456 --rc geninfo_unexecuted_blocks=1 00:28:12.456 00:28:12.456 ' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.456 --rc genhtml_branch_coverage=1 00:28:12.456 --rc genhtml_function_coverage=1 00:28:12.456 --rc genhtml_legend=1 00:28:12.456 --rc geninfo_all_blocks=1 00:28:12.456 --rc geninfo_unexecuted_blocks=1 00:28:12.456 00:28:12.456 ' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:12.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.456 --rc genhtml_branch_coverage=1 00:28:12.456 --rc genhtml_function_coverage=1 00:28:12.456 --rc genhtml_legend=1 00:28:12.456 --rc geninfo_all_blocks=1 00:28:12.456 --rc geninfo_unexecuted_blocks=1 00:28:12.456 00:28:12.456 ' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.456 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:12.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:12.457 03:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.457 ************************************ 00:28:12.457 START TEST nvmf_shutdown_tc1 00:28:12.457 ************************************ 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.457 03:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.363 03:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.363 03:09:24 
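The block above is nvmf/common.sh classifying the host's NICs: an associative array of discovered PCI functions (pci_bus_cache) is keyed by vendor:device ID, matches are collected into the e810, x722 and mlx lists, and the e810 list is selected because this rig carries Intel E810-family ports (device ID 0x159b). How the cache itself is filled is not visible in this excerpt; a plausible stand-alone equivalent built from lspci could look like the following sketch (the parsing is an assumption, not the script's actual code):

  declare -A pci_bus_cache
  # Index every PCI function by "0xVENDOR:0xDEVICE" (assumed construction).
  while read -r addr _ vendev _; do
    pci_bus_cache["0x${vendev%%:*}:0x${vendev##*:}"]+="$addr "
  done < <(lspci -nD)
  intel=0x8086
  e810=(${pci_bus_cache["$intel:0x159b"]})   # the two ports enumerated below
  echo "E810 functions: ${e810[*]}"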
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:14.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:14.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:14.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:14.363 03:09:24 
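For each selected PCI function the script then resolves the attached kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net/ and keeps only interfaces reported as up, which is how 0000:0a:00.0 maps to cvl_0_0 here (and 0000:0a:00.1 to cvl_0_1 in the next iteration). The same lookup as a self-contained snippet, assuming the ice driver is bound and the port is up:

  pci=0000:0a:00.0
  for path in /sys/bus/pci/devices/$pci/net/*; do
    dev=${path##*/}                                   # strip the sysfs prefix, as the script does
    state=$(cat /sys/class/net/$dev/operstate 2>/dev/null)
    [ "$state" = up ] && echo "Found net devices under $pci: $dev"
  done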
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:14.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.363 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.364 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.623 03:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:28:14.623 00:28:14.623 --- 10.0.0.2 ping statistics --- 00:28:14.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.623 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:14.623 00:28:14.623 --- 10.0.0.1 ping statistics --- 00:28:14.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.623 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=324577 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 324577 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 324577 ']' 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
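nvmf_tcp_init, traced above, turns the two E810 ports (presumably cabled back-to-back) into a target/initiator pair on one host: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator side, and a ping in each direction proves the link before the target is launched inside the namespace (hence the "ip netns exec" prefix folded into NVMF_APP). Condensed replay of that plumbing, with the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator keeps the other port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # root ns -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> initiator port

The iptables rule is tagged with an SPDK_NVMF comment so that nvmftestfini can strip it again at teardown with iptables-save | grep -v SPDK_NVMF | iptables-restore, which is visible at the end of this test.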
00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.623 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.623 [2024-11-19 03:09:25.119383] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:14.623 [2024-11-19 03:09:25.119467] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.623 [2024-11-19 03:09:25.193916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.882 [2024-11-19 03:09:25.244664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.882 [2024-11-19 03:09:25.244739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.882 [2024-11-19 03:09:25.244760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.882 [2024-11-19 03:09:25.244787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.882 [2024-11-19 03:09:25.244797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.882 [2024-11-19 03:09:25.246618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.882 [2024-11-19 03:09:25.248710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.882 [2024-11-19 03:09:25.248773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:14.882 [2024-11-19 03:09:25.248777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.882 [2024-11-19 03:09:25.397543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:14.882 03:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.882 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.882 Malloc1 
00:28:15.141 [2024-11-19 03:09:25.506181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.141 Malloc2 00:28:15.141 Malloc3 00:28:15.141 Malloc4 00:28:15.141 Malloc5 00:28:15.141 Malloc6 00:28:15.401 Malloc7 00:28:15.401 Malloc8 00:28:15.401 Malloc9 00:28:15.401 Malloc10 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=324671 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 324671 /var/tmp/bdevperf.sock 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 324671 ']' 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:15.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
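starttarget then builds rpcs.txt in a loop, one snippet per subsystem, and replays the whole file through rpc_cmd in a single pass; the Malloc1 through Malloc10 lines are the malloc bdevs it creates, and the tcp.c notice shows the first listener coming up on 10.0.0.2 port 4420. The exact text each iteration appends lives in target/shutdown.sh and is not echoed here, but given the bdev names, NQNs, listener address and the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values above it is presumably equivalent to something like this (serial numbers and the batched rpc_cmd call are assumptions):

  for i in {1..10}; do
    cat >> "$testdir/rpcs.txt" <<RPC
  bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  RPC
  done
  rpc_cmd < "$testdir/rpcs.txt"    # one RPC pass creates all ten subsystems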
00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.401 { 00:28:15.401 "params": { 00:28:15.401 "name": "Nvme$subsystem", 00:28:15.401 "trtype": "$TEST_TRANSPORT", 00:28:15.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.401 "adrfam": "ipv4", 00:28:15.401 "trsvcid": "$NVMF_PORT", 00:28:15.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.401 "hdgst": ${hdgst:-false}, 00:28:15.401 "ddgst": ${ddgst:-false} 00:28:15.401 }, 00:28:15.401 "method": "bdev_nvme_attach_controller" 00:28:15.401 } 00:28:15.401 EOF 00:28:15.401 )") 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.401 { 00:28:15.401 "params": { 00:28:15.401 "name": "Nvme$subsystem", 00:28:15.401 "trtype": "$TEST_TRANSPORT", 00:28:15.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.401 "adrfam": "ipv4", 00:28:15.401 "trsvcid": "$NVMF_PORT", 00:28:15.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.401 "hdgst": ${hdgst:-false}, 00:28:15.401 "ddgst": ${ddgst:-false} 00:28:15.401 }, 00:28:15.401 "method": "bdev_nvme_attach_controller" 00:28:15.401 } 00:28:15.401 EOF 00:28:15.401 )") 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.401 { 00:28:15.401 "params": { 00:28:15.401 "name": "Nvme$subsystem", 00:28:15.401 "trtype": "$TEST_TRANSPORT", 00:28:15.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.401 "adrfam": "ipv4", 00:28:15.401 "trsvcid": "$NVMF_PORT", 00:28:15.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.401 "hdgst": ${hdgst:-false}, 00:28:15.401 "ddgst": ${ddgst:-false} 00:28:15.401 }, 00:28:15.401 "method": "bdev_nvme_attach_controller" 00:28:15.401 } 00:28:15.401 EOF 00:28:15.401 )") 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.401 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.401 { 00:28:15.401 "params": { 00:28:15.401 "name": "Nvme$subsystem", 00:28:15.401 
"trtype": "$TEST_TRANSPORT", 00:28:15.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.401 "adrfam": "ipv4", 00:28:15.401 "trsvcid": "$NVMF_PORT", 00:28:15.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.401 "hdgst": ${hdgst:-false}, 00:28:15.402 "ddgst": ${ddgst:-false} 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 } 00:28:15.402 EOF 00:28:15.402 )") 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.402 { 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme$subsystem", 00:28:15.402 "trtype": "$TEST_TRANSPORT", 00:28:15.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "$NVMF_PORT", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.402 "hdgst": ${hdgst:-false}, 00:28:15.402 "ddgst": ${ddgst:-false} 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 } 00:28:15.402 EOF 00:28:15.402 )") 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.402 { 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme$subsystem", 00:28:15.402 "trtype": "$TEST_TRANSPORT", 00:28:15.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "$NVMF_PORT", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.402 "hdgst": ${hdgst:-false}, 00:28:15.402 "ddgst": ${ddgst:-false} 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 } 00:28:15.402 EOF 00:28:15.402 )") 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.402 { 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme$subsystem", 00:28:15.402 "trtype": "$TEST_TRANSPORT", 00:28:15.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "$NVMF_PORT", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.402 "hdgst": ${hdgst:-false}, 00:28:15.402 "ddgst": ${ddgst:-false} 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 } 00:28:15.402 EOF 00:28:15.402 )") 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.402 03:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.402 { 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme$subsystem", 00:28:15.402 "trtype": "$TEST_TRANSPORT", 00:28:15.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "$NVMF_PORT", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.402 "hdgst": ${hdgst:-false}, 00:28:15.402 "ddgst": ${ddgst:-false} 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 } 00:28:15.402 EOF 00:28:15.402 )") 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.402 03:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.402 { 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme$subsystem", 00:28:15.402 "trtype": "$TEST_TRANSPORT", 00:28:15.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "$NVMF_PORT", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.402 "hdgst": ${hdgst:-false}, 00:28:15.402 "ddgst": ${ddgst:-false} 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 } 00:28:15.402 EOF 00:28:15.402 )") 00:28:15.402 03:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.402 03:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.402 03:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.402 { 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme$subsystem", 00:28:15.402 "trtype": "$TEST_TRANSPORT", 00:28:15.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "$NVMF_PORT", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.402 "hdgst": ${hdgst:-false}, 00:28:15.402 "ddgst": ${ddgst:-false} 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 } 00:28:15.402 EOF 00:28:15.402 )") 00:28:15.402 03:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:15.402 03:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
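gen_nvmf_target_json, whose per-subsystem heredoc is expanded ten times above, emits one bdev_nvme_attach_controller entry for every subsystem number it is given: each fragment is evaluated against TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT, the fragments are comma-joined, and jq validates the result before it is handed to the app over a process-substitution fd (the /dev/fd/63 in the bdev_svc command line). A reduced sketch of that generator follows; the real helper in nvmf/common.sh also wraps the entries in the full SPDK JSON-config envelope, which this excerpt of the log does not show, so the bracket wrapping and the function name here are illustrative:

  gen_entries() {            # illustrative name; the real helper is gen_nvmf_target_json
    local subsystem config=()
    for subsystem in "${@:-1}"; do
      config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "$TEST_TRANSPORT",
      "traddr": "$NVMF_FIRST_TARGET_IP",
      "adrfam": "ipv4",
      "trsvcid": "$NVMF_PORT",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": ${hdgst:-false},
      "ddgst": ${ddgst:-false}
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
      )")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}"   # bracketed here so the sketch is valid JSON on its own
  }
  gen_entries {1..10} | jq length    # -> 10 attach_controller entries

The same generator output is used twice in this test: first for the throw-away bdev_svc instance that is deliberately SIGKILLed while it still holds connections to all ten subsystems, and again further down for bdevperf (-q 64 -o 65536 -w verify -t 1: queue depth 64, 64 KiB I/Os, verify pattern, one second) once kill -0 confirms the target survived the abrupt disconnects. That 64 KiB I/O size is also why the headline throughput reported later, 1762.00 IOPS, corresponds to 110.12 MiB/s (1762 x 64 KiB).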
00:28:15.402 03:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:15.402 03:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme1", 00:28:15.402 "trtype": "tcp", 00:28:15.402 "traddr": "10.0.0.2", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "4420", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:15.402 "hdgst": false, 00:28:15.402 "ddgst": false 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 },{ 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme2", 00:28:15.402 "trtype": "tcp", 00:28:15.402 "traddr": "10.0.0.2", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "4420", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:15.402 "hdgst": false, 00:28:15.402 "ddgst": false 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 },{ 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme3", 00:28:15.402 "trtype": "tcp", 00:28:15.402 "traddr": "10.0.0.2", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "4420", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:15.402 "hdgst": false, 00:28:15.402 "ddgst": false 00:28:15.402 }, 00:28:15.402 "method": "bdev_nvme_attach_controller" 00:28:15.402 },{ 00:28:15.402 "params": { 00:28:15.402 "name": "Nvme4", 00:28:15.402 "trtype": "tcp", 00:28:15.402 "traddr": "10.0.0.2", 00:28:15.402 "adrfam": "ipv4", 00:28:15.402 "trsvcid": "4420", 00:28:15.402 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:15.402 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:15.402 "hdgst": false, 00:28:15.402 "ddgst": false 00:28:15.403 }, 00:28:15.403 "method": "bdev_nvme_attach_controller" 00:28:15.403 },{ 00:28:15.403 "params": { 00:28:15.403 "name": "Nvme5", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.2", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:15.403 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:15.403 "hdgst": false, 00:28:15.403 "ddgst": false 00:28:15.403 }, 00:28:15.403 "method": "bdev_nvme_attach_controller" 00:28:15.403 },{ 00:28:15.403 "params": { 00:28:15.403 "name": "Nvme6", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.2", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:15.403 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:15.403 "hdgst": false, 00:28:15.403 "ddgst": false 00:28:15.403 }, 00:28:15.403 "method": "bdev_nvme_attach_controller" 00:28:15.403 },{ 00:28:15.403 "params": { 00:28:15.403 "name": "Nvme7", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.2", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:15.403 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:15.403 "hdgst": false, 00:28:15.403 "ddgst": false 00:28:15.403 }, 00:28:15.403 "method": "bdev_nvme_attach_controller" 00:28:15.403 },{ 00:28:15.403 "params": { 00:28:15.403 "name": "Nvme8", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.2", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:15.403 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:15.403 "hdgst": false, 00:28:15.403 "ddgst": false 00:28:15.403 }, 00:28:15.403 "method": "bdev_nvme_attach_controller" 00:28:15.403 },{ 00:28:15.403 "params": { 00:28:15.403 "name": "Nvme9", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.2", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:15.403 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:15.403 "hdgst": false, 00:28:15.403 "ddgst": false 00:28:15.403 }, 00:28:15.403 "method": "bdev_nvme_attach_controller" 00:28:15.403 },{ 00:28:15.403 "params": { 00:28:15.403 "name": "Nvme10", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.2", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:15.403 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:15.403 "hdgst": false, 00:28:15.403 "ddgst": false 00:28:15.403 }, 00:28:15.403 "method": "bdev_nvme_attach_controller" 00:28:15.403 }' 00:28:15.403 [2024-11-19 03:09:26.016380] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:15.403 [2024-11-19 03:09:26.016471] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:15.662 [2024-11-19 03:09:26.091433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.662 [2024-11-19 03:09:26.138125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 324671 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:17.561 03:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:18.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 324671 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 324577 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 "trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 "trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 "trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 
"trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 "trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 "trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 "trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 
"params": { 00:28:18.495 "name": "Nvme$subsystem", 00:28:18.495 "trtype": "$TEST_TRANSPORT", 00:28:18.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.495 "adrfam": "ipv4", 00:28:18.495 "trsvcid": "$NVMF_PORT", 00:28:18.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.495 "hdgst": ${hdgst:-false}, 00:28:18.495 "ddgst": ${ddgst:-false} 00:28:18.495 }, 00:28:18.495 "method": "bdev_nvme_attach_controller" 00:28:18.495 } 00:28:18.495 EOF 00:28:18.495 )") 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.495 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.495 { 00:28:18.495 "params": { 00:28:18.496 "name": "Nvme$subsystem", 00:28:18.496 "trtype": "$TEST_TRANSPORT", 00:28:18.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "$NVMF_PORT", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.496 "hdgst": ${hdgst:-false}, 00:28:18.496 "ddgst": ${ddgst:-false} 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 } 00:28:18.496 EOF 00:28:18.496 )") 00:28:18.496 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.496 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.496 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.496 { 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme$subsystem", 00:28:18.496 "trtype": "$TEST_TRANSPORT", 00:28:18.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "$NVMF_PORT", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.496 "hdgst": ${hdgst:-false}, 00:28:18.496 "ddgst": ${ddgst:-false} 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 } 00:28:18.496 EOF 00:28:18.496 )") 00:28:18.496 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.496 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:18.496 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:18.496 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme1", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme2", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme3", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme4", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme5", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme6", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme7", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme8", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme9", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 },{ 00:28:18.496 "params": { 00:28:18.496 "name": "Nvme10", 00:28:18.496 "trtype": "tcp", 00:28:18.496 "traddr": "10.0.0.2", 00:28:18.496 "adrfam": "ipv4", 00:28:18.496 "trsvcid": "4420", 00:28:18.496 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:18.496 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:18.496 "hdgst": false, 00:28:18.496 "ddgst": false 00:28:18.496 }, 00:28:18.496 "method": "bdev_nvme_attach_controller" 00:28:18.496 }' 00:28:18.496 [2024-11-19 03:09:29.079125] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:18.496 [2024-11-19 03:09:29.079210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325056 ] 00:28:18.766 [2024-11-19 03:09:29.153782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.766 [2024-11-19 03:09:29.201057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.143 Running I/O for 1 seconds... 00:28:21.337 1762.00 IOPS, 110.12 MiB/s 00:28:21.337 Latency(us) 00:28:21.337 [2024-11-19T02:09:31.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.337 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme1n1 : 1.14 225.17 14.07 0.00 0.00 281453.99 21554.06 257872.02 00:28:21.337 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme2n1 : 1.08 177.60 11.10 0.00 0.00 350665.13 26408.58 298261.62 00:28:21.337 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme3n1 : 1.10 257.31 16.08 0.00 0.00 233845.77 6456.51 253211.69 00:28:21.337 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme4n1 : 1.08 236.24 14.76 0.00 0.00 254306.80 17961.72 254765.13 00:28:21.337 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme5n1 : 1.10 233.06 14.57 0.00 0.00 253417.05 22622.06 254765.13 00:28:21.337 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme6n1 : 1.13 231.52 14.47 0.00 0.00 250090.59 2257.35 256318.58 00:28:21.337 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme7n1 : 1.19 268.24 16.76 0.00 0.00 214349.90 18544.26 253211.69 00:28:21.337 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification 
LBA range: start 0x0 length 0x400 00:28:21.337 Nvme8n1 : 1.13 226.43 14.15 0.00 0.00 248026.83 17185.00 257872.02 00:28:21.337 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme9n1 : 1.20 266.71 16.67 0.00 0.00 208360.11 6990.51 260978.92 00:28:21.337 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.337 Verification LBA range: start 0x0 length 0x400 00:28:21.337 Nvme10n1 : 1.21 265.51 16.59 0.00 0.00 205954.58 5631.24 281173.71 00:28:21.337 [2024-11-19T02:09:31.952Z] =================================================================================================================== 00:28:21.337 [2024-11-19T02:09:31.952Z] Total : 2387.80 149.24 0.00 0.00 244664.55 2257.35 298261.62 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.595 03:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.595 rmmod nvme_tcp 00:28:21.595 rmmod nvme_fabrics 00:28:21.595 rmmod nvme_keyring 00:28:21.595 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.595 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:21.595 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:21.595 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 324577 ']' 00:28:21.595 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 324577 00:28:21.595 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 324577 ']' 00:28:21.595 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 324577 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324577 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324577' 00:28:21.596 killing process with pid 324577 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 324577 00:28:21.596 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 324577 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.163 03:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.066 00:28:24.066 real 0m11.561s 00:28:24.066 user 0m33.898s 00:28:24.066 sys 0m3.111s 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.066 ************************************ 00:28:24.066 END TEST nvmf_shutdown_tc1 00:28:24.066 ************************************ 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
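Before the tc2 run starts below, note that the nvmf_shutdown_tc1 teardown traced above is a short, fixed sequence. A rough standalone equivalent of the nvmftestfini path for the TCP transport, using the PID and interface names from this run, would be (a sketch, not the literal common.sh code):

    # Sketch of the traced nvmftestfini teardown (values taken from this run)
    sync
    modprobe -v -r nvme-tcp                                 # also drops the nvme_fabrics / nvme_keyring dependencies
    modprobe -v -r nvme-fabrics
    kill 324577                                             # killprocess: stop the nvmf_tgt used by tc1, then wait on it
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove only the SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns (its output is redirected in the trace)
    ip -4 addr flush cvl_0_1                                # clear the initiator-side address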
00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:24.066 ************************************ 00:28:24.066 START TEST nvmf_shutdown_tc2 00:28:24.066 ************************************ 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:24.066 03:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.066 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.325 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.325 03:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:24.325 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.325 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.326 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.326 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.326 03:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:28:24.326 00:28:24.326 --- 10.0.0.2 ping statistics --- 00:28:24.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.326 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:28:24.326 00:28:24.326 --- 10.0.0.1 ping statistics --- 00:28:24.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.326 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=325822 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 325822 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 325822 ']' 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
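The nvmftestinit trace above builds a single-host loopback topology: the second E810 port (cvl_0_0) is moved into a private network namespace and carries the target address, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the traced commands:

    # Traced network setup, condensed; the target runs inside cvl_0_0_ns_spdk
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Both pings succeed above, and nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (the nvmfpid entry), so it listens on 10.0.0.2:4420 while bdevperf later connects from the root namespace.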
00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.326 03:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.326 [2024-11-19 03:09:34.889209] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:24.326 [2024-11-19 03:09:34.889293] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.628 [2024-11-19 03:09:34.964091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.628 [2024-11-19 03:09:35.008797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.628 [2024-11-19 03:09:35.008858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.628 [2024-11-19 03:09:35.008881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.628 [2024-11-19 03:09:35.008892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.628 [2024-11-19 03:09:35.008901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.628 [2024-11-19 03:09:35.010332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.628 [2024-11-19 03:09:35.010437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.628 [2024-11-19 03:09:35.010534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:24.628 [2024-11-19 03:09:35.010541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.628 [2024-11-19 03:09:35.147303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:24.628 03:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.628 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.983 Malloc1 
00:28:24.983 [2024-11-19 03:09:35.241049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.983 Malloc2 00:28:24.983 Malloc3 00:28:24.983 Malloc4 00:28:24.983 Malloc5 00:28:24.983 Malloc6 00:28:24.983 Malloc7 00:28:24.983 Malloc8 00:28:25.246 Malloc9 00:28:25.246 Malloc10 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=326004 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 326004 /var/tmp/bdevperf.sock 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 326004 ']' 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:25.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
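The ten cat heredocs traced above are appended to rpcs.txt and replayed in a single rpc_cmd call; their bodies are not expanded in the log. Based on the Malloc1 through Malloc10 bdevs created here and the nqn.2016-06.io.spdk:cnodeN subsystems that bdevperf attaches to below, each iteration plausibly amounts to a batch along these lines (a hedged reconstruction with assumed sizes and serial numbers, not the literal shutdown.sh heredoc):

    # Hypothetical per-subsystem setup; the RPC names are real, the 128 MiB / 512 B sizing and SPDK$i serials are assumed
    for i in {1..10}; do
        rpc_cmd bdev_malloc_create -b Malloc$i 128 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above is the listener side of this taking effect.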
00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.246 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.246 { 00:28:25.246 "params": { 00:28:25.246 "name": "Nvme$subsystem", 00:28:25.246 "trtype": "$TEST_TRANSPORT", 00:28:25.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.246 "adrfam": "ipv4", 00:28:25.246 "trsvcid": "$NVMF_PORT", 00:28:25.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.246 "hdgst": ${hdgst:-false}, 00:28:25.246 "ddgst": ${ddgst:-false} 00:28:25.246 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 
"trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.247 { 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme$subsystem", 00:28:25.247 "trtype": "$TEST_TRANSPORT", 00:28:25.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "$NVMF_PORT", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.247 "hdgst": ${hdgst:-false}, 00:28:25.247 "ddgst": ${ddgst:-false} 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 } 00:28:25.247 EOF 00:28:25.247 )") 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
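The config+=() loop traced above builds one bdev_nvme_attach_controller fragment per subsystem; the jq, IFS=, and printf entries just below join those fragments into the JSON that bdevperf reads from /dev/fd/63. Condensed, and with the enclosing bdev-subsystem envelope assumed (only the fragment loop and the comma join are visible in the trace; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the test environment):

    # Sketch of gen_nvmf_target_json as traced; envelope around the fragments is an assumption
    config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON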
00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:25.247 03:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme1", 00:28:25.247 "trtype": "tcp", 00:28:25.247 "traddr": "10.0.0.2", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "4420", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.247 "hdgst": false, 00:28:25.247 "ddgst": false 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 },{ 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme2", 00:28:25.247 "trtype": "tcp", 00:28:25.247 "traddr": "10.0.0.2", 00:28:25.247 "adrfam": "ipv4", 00:28:25.247 "trsvcid": "4420", 00:28:25.247 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:25.247 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:25.247 "hdgst": false, 00:28:25.247 "ddgst": false 00:28:25.247 }, 00:28:25.247 "method": "bdev_nvme_attach_controller" 00:28:25.247 },{ 00:28:25.247 "params": { 00:28:25.247 "name": "Nvme3", 00:28:25.247 "trtype": "tcp", 00:28:25.247 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 },{ 00:28:25.248 "params": { 00:28:25.248 "name": "Nvme4", 00:28:25.248 "trtype": "tcp", 00:28:25.248 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 },{ 00:28:25.248 "params": { 00:28:25.248 "name": "Nvme5", 00:28:25.248 "trtype": "tcp", 00:28:25.248 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 },{ 00:28:25.248 "params": { 00:28:25.248 "name": "Nvme6", 00:28:25.248 "trtype": "tcp", 00:28:25.248 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 },{ 00:28:25.248 "params": { 00:28:25.248 "name": "Nvme7", 00:28:25.248 "trtype": "tcp", 00:28:25.248 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 },{ 00:28:25.248 "params": { 00:28:25.248 "name": "Nvme8", 00:28:25.248 "trtype": "tcp", 00:28:25.248 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 },{ 00:28:25.248 "params": { 00:28:25.248 "name": "Nvme9", 00:28:25.248 "trtype": "tcp", 00:28:25.248 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 },{ 00:28:25.248 "params": { 00:28:25.248 "name": "Nvme10", 00:28:25.248 "trtype": "tcp", 00:28:25.248 "traddr": "10.0.0.2", 00:28:25.248 "adrfam": "ipv4", 00:28:25.248 "trsvcid": "4420", 00:28:25.248 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:25.248 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:25.248 "hdgst": false, 00:28:25.248 "ddgst": false 00:28:25.248 }, 00:28:25.248 "method": "bdev_nvme_attach_controller" 00:28:25.248 }' 00:28:25.248 [2024-11-19 03:09:35.744126] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:25.248 [2024-11-19 03:09:35.744215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326004 ] 00:28:25.248 [2024-11-19 03:09:35.815855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.248 [2024-11-19 03:09:35.862530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.146 Running I/O for 10 seconds... 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.146 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.404 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.404 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:27.404 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:27.404 03:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:27.663 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=140 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 140 -ge 100 ']' 00:28:27.922 03:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 326004 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 326004 ']' 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 326004 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326004 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326004' 00:28:27.922 killing process with pid 326004 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 326004 00:28:27.922 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 326004 00:28:27.922 Received shutdown signal, test time was about 0.978603 seconds 00:28:27.922 00:28:27.922 Latency(us) 00:28:27.922 [2024-11-19T02:09:38.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.922 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme1n1 : 0.97 264.11 16.51 0.00 0.00 239564.80 19612.25 251658.24 00:28:27.922 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme2n1 : 0.97 262.96 16.44 0.00 0.00 235646.48 17282.09 256318.58 00:28:27.922 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme3n1 : 0.97 273.20 17.08 0.00 0.00 222417.40 3252.53 253211.69 00:28:27.922 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme4n1 : 0.96 267.31 16.71 0.00 0.00 223496.72 18641.35 253211.69 00:28:27.922 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme5n1 : 0.93 205.94 12.87 0.00 0.00 283711.72 27379.48 251658.24 00:28:27.922 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme6n1 : 0.93 207.03 12.94 0.00 0.00 276106.56 19612.25 256318.58 00:28:27.922 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 
64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme7n1 : 0.95 202.96 12.69 0.00 0.00 276488.60 21359.88 254765.13 00:28:27.922 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme8n1 : 0.98 261.81 16.36 0.00 0.00 210926.36 18932.62 234570.33 00:28:27.922 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme9n1 : 0.96 200.00 12.50 0.00 0.00 269295.57 20583.16 285834.05 00:28:27.922 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.922 Verification LBA range: start 0x0 length 0x400 00:28:27.922 Nvme10n1 : 0.95 201.49 12.59 0.00 0.00 261532.13 21068.61 259425.47 00:28:27.922 [2024-11-19T02:09:38.537Z] =================================================================================================================== 00:28:27.923 [2024-11-19T02:09:38.538Z] Total : 2346.82 146.68 0.00 0.00 246474.39 3252.53 285834.05 00:28:28.181 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 325822 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.113 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.371 rmmod nvme_tcp 00:28:29.371 rmmod nvme_fabrics 00:28:29.371 rmmod nvme_keyring 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 325822 ']' 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 
-- # killprocess 325822 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 325822 ']' 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 325822 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325822 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325822' 00:28:29.371 killing process with pid 325822 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 325822 00:28:29.371 03:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 325822 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.939 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.847 00:28:31.847 real 0m7.684s 00:28:31.847 user 0m23.635s 00:28:31.847 sys 0m1.499s 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.847 
************************************ 00:28:31.847 END TEST nvmf_shutdown_tc2 00:28:31.847 ************************************ 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:31.847 ************************************ 00:28:31.847 START TEST nvmf_shutdown_tc3 00:28:31.847 ************************************ 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.847 03:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.847 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.848 03:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:31.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:31.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:28:31.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:31.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.848 03:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.848 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:28:32.108 00:28:32.108 --- 10.0.0.2 ping statistics --- 00:28:32.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.108 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:28:32.108 00:28:32.108 --- 10.0.0.1 ping statistics --- 00:28:32.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.108 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=326920 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 326920 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 326920 ']' 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
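The traced nvmf/common.sh calls above (ip netns add through the two pings) are the whole of the test topology: the first ice port, cvl_0_0, is moved into the cvl_0_0_ns_spdk namespace and addressed as the NVMe/TCP target, while cvl_0_1 stays in the host namespace as the initiator. A condensed sketch of the same wiring, using only the interface names and addresses seen in this run, looks roughly like this:

    # Topology built by nvmftestinit in this log (names/addresses taken from the trace above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address (namespace side)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP port 4420 through
    ping -c 1 10.0.0.2                                                     # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target namespace -> host

Both pings report 0% packet loss, so nvmftestinit returns 0 and nvmfappstart launches nvmf_tgt inside the namespace (the "Waiting for process to start up..." record just above).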
00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.108 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.108 [2024-11-19 03:09:42.682705] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:32.108 [2024-11-19 03:09:42.682807] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.367 [2024-11-19 03:09:42.754608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.367 [2024-11-19 03:09:42.801131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.367 [2024-11-19 03:09:42.801186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.367 [2024-11-19 03:09:42.801210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.367 [2024-11-19 03:09:42.801221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.367 [2024-11-19 03:09:42.801231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.367 [2024-11-19 03:09:42.802725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.367 [2024-11-19 03:09:42.802824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.367 [2024-11-19 03:09:42.802897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.367 [2024-11-19 03:09:42.802894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.367 [2024-11-19 03:09:42.939383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:32.367 03:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.367 03:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.625 Malloc1 
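The shutdown.sh@27-@36 records above assemble the target configuration: rpcs.txt is cleared, the @29 cat appends one stanza per subsystem (1 through 10), and the rpc_cmd at @36 then applies them in one shot; the Malloc1 through Malloc10 names printed around here are the responses from those create calls, followed just below by the 10.0.0.2:4420 TCP listener notice. The stanza text itself is not echoed in this trace; a plausible reconstruction using standard SPDK RPC names (sizes and serial numbers here are placeholders, not values from this run) is:

    # Hypothetical per-subsystem stanza appended to rpcs.txt for i = 1..10.
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420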
00:28:32.625 [2024-11-19 03:09:43.029482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.625 Malloc2 00:28:32.625 Malloc3 00:28:32.625 Malloc4 00:28:32.625 Malloc5 00:28:32.625 Malloc6 00:28:32.884 Malloc7 00:28:32.884 Malloc8 00:28:32.884 Malloc9 00:28:32.884 Malloc10 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=327097 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 327097 /var/tmp/bdevperf.sock 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 327097 ']' 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:32.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
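With the subsystems in place, the rest of this block launches the I/O generator: bdevperf is started with -q 64 -o 65536 -w verify -t 10 and an NVMe-oF attach configuration fed through --json /dev/fd/63, and gen_nvmf_target_json builds that document from one heredoc per subsystem (the repeated config+=(... EOF ...) records that follow, with the fully substituted entries printed after the jq step). Outside the harness the same attach can be reproduced with an ordinary file; the sketch below keeps a single controller, copies its values from this run, and wraps it in the standard SPDK JSON-config envelope, which the trace itself never prints:

    # Sketch: single-controller stand-in for the generated config (nvmf.json is an illustrative name).
    cat > nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same bdevperf options as the traced command: queue depth 64, 64 KiB verify I/O, 10 seconds.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json nvmf.json -q 64 -o 65536 -w verify -t 10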
00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:32.884 { 00:28:32.884 "params": { 00:28:32.884 "name": "Nvme$subsystem", 00:28:32.884 "trtype": "$TEST_TRANSPORT", 00:28:32.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.884 "adrfam": "ipv4", 00:28:32.884 "trsvcid": "$NVMF_PORT", 00:28:32.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.884 "hdgst": ${hdgst:-false}, 00:28:32.884 "ddgst": ${ddgst:-false} 00:28:32.884 }, 00:28:32.884 "method": "bdev_nvme_attach_controller" 00:28:32.884 } 00:28:32.884 EOF 00:28:32.884 )") 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:32.884 { 00:28:32.884 "params": { 00:28:32.884 "name": "Nvme$subsystem", 00:28:32.884 "trtype": "$TEST_TRANSPORT", 00:28:32.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.884 "adrfam": "ipv4", 00:28:32.884 "trsvcid": "$NVMF_PORT", 00:28:32.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.884 "hdgst": ${hdgst:-false}, 00:28:32.884 "ddgst": ${ddgst:-false} 00:28:32.884 }, 00:28:32.884 "method": "bdev_nvme_attach_controller" 00:28:32.884 } 00:28:32.884 EOF 00:28:32.884 )") 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:32.884 { 00:28:32.884 "params": { 00:28:32.884 "name": "Nvme$subsystem", 00:28:32.884 "trtype": "$TEST_TRANSPORT", 00:28:32.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.884 "adrfam": "ipv4", 00:28:32.884 "trsvcid": "$NVMF_PORT", 00:28:32.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.884 "hdgst": ${hdgst:-false}, 00:28:32.884 "ddgst": ${ddgst:-false} 00:28:32.884 }, 00:28:32.884 "method": "bdev_nvme_attach_controller" 00:28:32.884 } 00:28:32.884 EOF 00:28:32.884 )") 00:28:32.884 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.144 { 00:28:33.144 "params": { 00:28:33.144 "name": "Nvme$subsystem", 00:28:33.144 
"trtype": "$TEST_TRANSPORT", 00:28:33.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.144 "adrfam": "ipv4", 00:28:33.144 "trsvcid": "$NVMF_PORT", 00:28:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.144 "hdgst": ${hdgst:-false}, 00:28:33.144 "ddgst": ${ddgst:-false} 00:28:33.144 }, 00:28:33.144 "method": "bdev_nvme_attach_controller" 00:28:33.144 } 00:28:33.144 EOF 00:28:33.144 )") 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.144 { 00:28:33.144 "params": { 00:28:33.144 "name": "Nvme$subsystem", 00:28:33.144 "trtype": "$TEST_TRANSPORT", 00:28:33.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.144 "adrfam": "ipv4", 00:28:33.144 "trsvcid": "$NVMF_PORT", 00:28:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.144 "hdgst": ${hdgst:-false}, 00:28:33.144 "ddgst": ${ddgst:-false} 00:28:33.144 }, 00:28:33.144 "method": "bdev_nvme_attach_controller" 00:28:33.144 } 00:28:33.144 EOF 00:28:33.144 )") 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.144 { 00:28:33.144 "params": { 00:28:33.144 "name": "Nvme$subsystem", 00:28:33.144 "trtype": "$TEST_TRANSPORT", 00:28:33.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.144 "adrfam": "ipv4", 00:28:33.144 "trsvcid": "$NVMF_PORT", 00:28:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.144 "hdgst": ${hdgst:-false}, 00:28:33.144 "ddgst": ${ddgst:-false} 00:28:33.144 }, 00:28:33.144 "method": "bdev_nvme_attach_controller" 00:28:33.144 } 00:28:33.144 EOF 00:28:33.144 )") 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.144 { 00:28:33.144 "params": { 00:28:33.144 "name": "Nvme$subsystem", 00:28:33.144 "trtype": "$TEST_TRANSPORT", 00:28:33.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.144 "adrfam": "ipv4", 00:28:33.144 "trsvcid": "$NVMF_PORT", 00:28:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.144 "hdgst": ${hdgst:-false}, 00:28:33.144 "ddgst": ${ddgst:-false} 00:28:33.144 }, 00:28:33.144 "method": "bdev_nvme_attach_controller" 00:28:33.144 } 00:28:33.144 EOF 00:28:33.144 )") 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.144 03:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.144 { 00:28:33.144 "params": { 00:28:33.144 "name": "Nvme$subsystem", 00:28:33.144 "trtype": "$TEST_TRANSPORT", 00:28:33.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.144 "adrfam": "ipv4", 00:28:33.144 "trsvcid": "$NVMF_PORT", 00:28:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.144 "hdgst": ${hdgst:-false}, 00:28:33.144 "ddgst": ${ddgst:-false} 00:28:33.144 }, 00:28:33.144 "method": "bdev_nvme_attach_controller" 00:28:33.144 } 00:28:33.144 EOF 00:28:33.144 )") 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.144 { 00:28:33.144 "params": { 00:28:33.144 "name": "Nvme$subsystem", 00:28:33.144 "trtype": "$TEST_TRANSPORT", 00:28:33.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.144 "adrfam": "ipv4", 00:28:33.144 "trsvcid": "$NVMF_PORT", 00:28:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.144 "hdgst": ${hdgst:-false}, 00:28:33.144 "ddgst": ${ddgst:-false} 00:28:33.144 }, 00:28:33.144 "method": "bdev_nvme_attach_controller" 00:28:33.144 } 00:28:33.144 EOF 00:28:33.144 )") 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.144 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.144 { 00:28:33.144 "params": { 00:28:33.144 "name": "Nvme$subsystem", 00:28:33.144 "trtype": "$TEST_TRANSPORT", 00:28:33.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "$NVMF_PORT", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.145 "hdgst": ${hdgst:-false}, 00:28:33.145 "ddgst": ${ddgst:-false} 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 } 00:28:33.145 EOF 00:28:33.145 )") 00:28:33.145 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.145 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:28:33.145 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:33.145 03:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme1", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme2", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme3", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme4", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme5", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme6", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme7", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme8", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme9", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 },{ 00:28:33.145 "params": { 00:28:33.145 "name": "Nvme10", 00:28:33.145 "trtype": "tcp", 00:28:33.145 "traddr": "10.0.0.2", 00:28:33.145 "adrfam": "ipv4", 00:28:33.145 "trsvcid": "4420", 00:28:33.145 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:33.145 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:33.145 "hdgst": false, 00:28:33.145 "ddgst": false 00:28:33.145 }, 00:28:33.145 "method": "bdev_nvme_attach_controller" 00:28:33.145 }' 00:28:33.145 [2024-11-19 03:09:43.537392] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:33.145 [2024-11-19 03:09:43.537467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327097 ] 00:28:33.145 [2024-11-19 03:09:43.608694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.145 [2024-11-19 03:09:43.655201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.046 Running I/O for 10 seconds... 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:35.046 03:09:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:35.046 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 326920 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 326920 ']' 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 326920 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.305 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 326920
00:28:35.579 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:35.579 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:35.579 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326920'
00:28:35.579 killing process with pid 326920
00:28:35.579 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 326920
00:28:35.579 03:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 326920
[2024-11-19 03:09:45.949746 - 03:09:45.950644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d5c00 is same with the state(6) to be set (identical message repeated continuously over this interval)
[2024-11-19 03:09:45.952181 - 03:09:45.953095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d87b0 is same with the state(6) to be set (identical message repeated continuously over this interval)
[2024-11-19 03:09:45.953866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 03:09:45.953909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.953927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 03:09:45.953942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.953965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 03:09:45.953980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.953995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 03:09:45.954008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.954021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeb450 is same with the state(6) to be set
[2024-11-19 03:09:45.954668 - 03:09:45.955367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d60d0 is same with the state(6) to be set (identical message repeated continuously over this interval)
[2024-11-19 03:09:45.957367 - 03:09:45.958279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d65a0 is same with the state(6) to be set (identical message repeated continuously over this interval)
[2024-11-19 03:09:45.959955 - 03:09:45.960888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6a90 is same with the state(6) to be set (identical message repeated continuously over this interval)
[2024-11-19 03:09:45.962625] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-11-19 03:09:45.963031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
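For orientation, the xtrace at the top of this test case (target/shutdown.sh @60-@70 together with the killprocess helper from autotest_common.sh) does the following: poll bdevperf's I/O counters for Nvme1n1 over the RPC socket until at least 100 reads have completed, then kill the nvmf target (pid 326920) while I/O is still in flight; the recv-state errors and ABORTED completions in this log are the expected fallout of that forced shutdown. A minimal sketch of that logic, reconstructed from the trace (the function name and the retry budget are assumptions; rpc_cmd and killprocess are the SPDK test-framework helpers and are not redefined here):

    # Sketch only -- not the literal shutdown.sh source.
    wait_for_reads_then_kill() {
        local i=20 read_io_count=0                  # retry budget assumed; the real value is not visible in the trace
        while ((i--)); do
            # same RPC the trace shows: read iostat from the bdevperf RPC socket
            read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                            | jq -r '.bdevs[0].num_read_ops')
            [ "$read_io_count" -ge 100 ] && break   # shutdown.sh@64: enough reads observed
            sleep 0.25                              # shutdown.sh@68
        done
        [ "$read_io_count" -ge 100 ] || return 1
        killprocess 326920                          # kill + wait on the nvmf target, mid-I/O
    }

In the run above the counter went from 67 to 131 between two polls, so the loop exited on its second check and the target was killed with writes still outstanding.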
00:28:35.583 [2024-11-19 03:09:45.963162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 [2024-11-19 03:09:45.963443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.583 [2024-11-19 03:09:45.963457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.583 
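Each *NOTICE* pair above and below is one in-flight 128-block WRITE on I/O queue 1 being completed in error while the queue is deleted: the "(00/08)" in the completion is the NVMe status code type / status code, i.e. generic command status (0x0) with status code 0x08, command aborted due to SQ deletion. When a log like this is saved to a file, a throwaway one-liner can tally the aborts per queue (the file name here is hypothetical):

    # count 'ABORTED - SQ DELETION' completions per qid in a saved copy of this console log
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' nvmf_shutdown_tc3.log \
        | awk '{print $NF}' | sort | uniq -c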
[2024-11-19 03:09:45.963473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963522 - 03:09:45.963998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set (identical message repeated continuously over this interval, interleaved with the command/completion notices that follow)
[2024-11-19 03:09:45.963534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.963961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 03:09:45.963992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 03:09:45.964009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:57 nsid:1 lba:23680 len:1[2024-11-19 03:09:45.964011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.584 the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with [2024-11-19 03:09:45.964025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:28:35.584 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.584 [2024-11-19 03:09:45.964040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.584 [2024-11-19 03:09:45.964052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.584 [2024-11-19 03:09:45.964065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.584 [2024-11-19 03:09:45.964077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.584 [2024-11-19 03:09:45.964096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.584 [2024-11-19 03:09:45.964109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.584 [2024-11-19 03:09:45.964122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with [2024-11-19 03:09:45.964136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:1the state(6) to be set 00:28:35.584 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.584 [2024-11-19 03:09:45.964150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.584 [2024-11-19 03:09:45.964162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.584 [2024-11-19 03:09:45.964168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.584 [2024-11-19 03:09:45.964175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with [2024-11-19 03:09:45.964199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:1the state(6) to be set 00:28:35.585 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964313] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7900 is same with the state(6) to be set 00:28:35.585 [2024-11-19 03:09:45.964409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.585 [2024-11-19 03:09:45.964522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 
03:09:45.964860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.964966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.964980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.965018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.965034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.965049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.965062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.965077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.965091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.965107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.965121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.965135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.965149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.965164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.585 [2024-11-19 03:09:45.965178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.585 [2024-11-19 03:09:45.965369] 
[2024-11-19 03:09:45.965369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.586
[2024-11-19 03:09:45.965392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.586
[... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:33-63 (lba:20608-24448, step 128) from 03:09:45.965413 through 03:09:45.966460 ...]
[2024-11-19 03:09:45.966476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.587
[2024-11-19 03:09:45.966489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.587
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-18 (lba:16512-18688, step 128) from 03:09:45.966505 through 03:09:45.967105 ...]
[2024-11-19 03:09:45.965484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7df0 is same with the state(6) to be set 00:28:35.586
[... the same tcp.c:1773 *ERROR* line for tqpair=0x15d7df0 repeats, interleaved with the qpair records above, from 03:09:45.965511 through 03:09:45.966392 ...]
[2024-11-19 03:09:45.967124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.588
[2024-11-19 03:09:45.967139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.588
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:20-31 (lba:18944-20352, step 128) from 03:09:45.967154 through 03:09:45.967509 ...]
[2024-11-19 03:09:45.967198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.588
[... the same tcp.c:1773 *ERROR* line for tqpair=0x15d82c0 repeats, first interleaved with the qpair records above and then back to back, from 03:09:45.967212 through 03:09:45.967802 ...]
[2024-11-19 03:09:45.967832] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.588
[... further tcp.c:1773 *ERROR* lines for tqpair=0x15d82c0, interleaved with the admin qpair records below, from 03:09:45.967851 through 03:09:45.968012 ...]
[2024-11-19 03:09:45.967934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589
[2024-11-19 03:09:45.967959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589
[2024-11-19 03:09:45.967976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589
[2024-11-19 03:09:45.967991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589
[2024-11-19 03:09:45.968005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589
[2024-11-19 03:09:45.968019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589
[2024-11-19 03:09:45.968025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with
the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe5280 is same w[2024-11-19 03:09:45.968063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with ith the state(6) to be set 00:28:35.589 the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-11-19 03:09:45.968150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with id:0 cdw10:00000000 cdw11:00000000 00:28:35.589 the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d82c0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33f50 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026940 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046990 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968620] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.968926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10274a0 is same with the state(6) to be set 00:28:35.589 [2024-11-19 03:09:45.968972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.968993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.969008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.969021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.589 [2024-11-19 03:09:45.969035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.589 [2024-11-19 03:09:45.969048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe41b0 is same with the state(6) to be set 00:28:35.590 [2024-11-19 03:09:45.969131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe4cd0 is same with the state(6) to be set 00:28:35.590 [2024-11-19 03:09:45.969292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.590 [2024-11-19 03:09:45.969394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.969406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf30b0 is same with the state(6) to be set 00:28:35.590 [2024-11-19 03:09:45.969440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeb450 (9): Bad file descriptor 00:28:35.590 [2024-11-19 03:09:45.972117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:35.590 [2024-11-19 03:09:45.972152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:35.590 [2024-11-19 03:09:45.972176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33f50 (9): Bad file descriptor 00:28:35.590 [2024-11-19 03:09:45.972198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe4cd0 (9): Bad file descriptor 00:28:35.590 [2024-11-19 03:09:45.972264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 
03:09:45.972467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972787] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.972981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.972997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.973011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.973029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.973044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.973061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.590 [2024-11-19 03:09:45.973075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.590 [2024-11-19 03:09:45.973091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.973852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.973866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.591 [2024-11-19 03:09:45.985861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.591 [2024-11-19 03:09:45.985878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeef30 is same with the state(6) to be set 00:28:35.591 [2024-11-19 03:09:45.986118] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.591 [2024-11-19 03:09:45.986595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe5280 (9): Bad file descriptor 00:28:35.591 [2024-11-19 03:09:45.986646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026940 (9): Bad file descriptor 00:28:35.591 [2024-11-19 03:09:45.986681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1046990 (9): Bad file descriptor 00:28:35.592 
[2024-11-19 03:09:45.986732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1059ee0 (9): Bad file descriptor 00:28:35.592 [2024-11-19 03:09:45.986766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10274a0 (9): Bad file descriptor 00:28:35.592 [2024-11-19 03:09:45.986799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe41b0 (9): Bad file descriptor 00:28:35.592 [2024-11-19 03:09:45.986830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf30b0 (9): Bad file descriptor 00:28:35.592 [2024-11-19 03:09:45.988058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.988974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.988989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.989018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.989048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.989077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.989107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.989136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.989169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.592 [2024-11-19 03:09:45.989200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.592 [2024-11-19 03:09:45.989215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.593 [2024-11-19 03:09:45.989564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 
03:09:45.989871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.989983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.989996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.990012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.990026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.990040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1260950 is same with the state(6) to be set 00:28:35.593 [2024-11-19 03:09:45.990208] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.593 [2024-11-19 03:09:45.990985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:35.593 [2024-11-19 03:09:45.991171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.593 [2024-11-19 03:09:45.991202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe4cd0 with addr=10.0.0.2, port=4420 00:28:35.593 [2024-11-19 03:09:45.991219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe4cd0 is same with the state(6) to be set 00:28:35.593 [2024-11-19 03:09:45.991308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.593 [2024-11-19 03:09:45.991333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33f50 with addr=10.0.0.2, port=4420 00:28:35.593 [2024-11-19 03:09:45.991350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33f50 is same with the state(6) to be set 00:28:35.593 [2024-11-19 03:09:45.991420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.991442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.991464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.991481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.991497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.991512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.991528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.593 [2024-11-19 03:09:45.991542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.593 [2024-11-19 03:09:45.991558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.991974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.991988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 
[2024-11-19 03:09:45.992100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 
03:09:45.992408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992719] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.594 [2024-11-19 03:09:45.992784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.594 [2024-11-19 03:09:45.992799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.992816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.992829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.992845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.992859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.992875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.992889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.992906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.992920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.992936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.992950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.992966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.992980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.992996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.993316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.993332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.595 [2024-11-19 03:09:45.993346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:35.595 [2024-11-19 03:09:45.993362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.595 [2024-11-19 03:09:45.993376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:35.595 [2024-11-19 03:09:45.993391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.595 [2024-11-19 03:09:45.993405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:35.595 [2024-11-19 03:09:45.993419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedd40 is same with the state(6) to be set
00:28:35.595 [2024-11-19 03:09:45.995917] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:35.595 [2024-11-19 03:09:45.995984] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:35.595 [2024-11-19 03:09:45.996094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:35.595 [2024-11-19 03:09:45.996123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:35.595 [2024-11-19 03:09:45.996268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.595 [2024-11-19 03:09:45.996298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf30b0 with addr=10.0.0.2, port=4420
00:28:35.595 [2024-11-19 03:09:45.996315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf30b0 is same with the state(6) to be set
00:28:35.595 [2024-11-19 03:09:45.996340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe4cd0 (9): Bad file descriptor
00:28:35.595 [2024-11-19 03:09:45.996361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33f50 (9): Bad file descriptor
00:28:35.595 [2024-11-19 03:09:45.996807] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:35.595 [2024-11-19 03:09:45.996960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.595 [2024-11-19 03:09:45.996989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbeb450 with addr=10.0.0.2, port=4420
00:28:35.595 [2024-11-19 03:09:45.997005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeb450 is same with the state(6) to be set
00:28:35.595 [2024-11-19 03:09:45.997094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.595 [2024-11-19 03:09:45.997120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5280 with addr=10.0.0.2, port=4420
00:28:35.595 [2024-11-19 03:09:45.997137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe5280 is same with the state(6) to be set
00:28:35.595 [2024-11-19 03:09:45.997156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf30b0 (9): Bad file descriptor
00:28:35.595 [2024-11-19 03:09:45.997174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:28:35.595 [2024-11-19 03:09:45.997188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:28:35.595 [2024-11-19 03:09:45.997205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:28:35.595 [2024-11-19 03:09:45.997223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:28:35.595 [2024-11-19 03:09:45.997240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:28:35.595 [2024-11-19 03:09:45.997253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:28:35.595 [2024-11-19 03:09:45.997265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:28:35.595 [2024-11-19 03:09:45.997278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:28:35.595 [2024-11-19 03:09:45.997950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeb450 (9): Bad file descriptor
00:28:35.595 [2024-11-19 03:09:45.997980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe5280 (9): Bad file descriptor
00:28:35.595 [2024-11-19 03:09:45.997997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:28:35.595 [2024-11-19 03:09:45.998011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:28:35.595 [2024-11-19 03:09:45.998025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:35.595 [2024-11-19 03:09:45.998038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:28:35.595 [2024-11-19 03:09:45.998145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.998170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.998194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.998210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.998227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.998242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.998264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.998279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.595 [2024-11-19 03:09:45.998295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.595 [2024-11-19 03:09:45.998311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 
03:09:45.998475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.998978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.998993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.596 [2024-11-19 03:09:45.999516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.596 [2024-11-19 03:09:45.999532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:45.999985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:45.999999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.000014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.000028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.000043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.000056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.000072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.000086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.000102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.000116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.000130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff25e0 is same with the state(6) to be set 00:28:35.597 [2024-11-19 03:09:46.001388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001576] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.597 [2024-11-19 03:09:46.001842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.597 [2024-11-19 03:09:46.001862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.001878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.001894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.001907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.001923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.001937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.001953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.001967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.001982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.001996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:35.598 [2024-11-19 03:09:46.002545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 
[2024-11-19 03:09:46.002863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.002977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.002993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.003007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.003023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.003041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.003059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.003073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.598 [2024-11-19 03:09:46.003088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.598 [2024-11-19 03:09:46.003102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 
03:09:46.003177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.003401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.003419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff6550 is same with the state(6) to be set 00:28:35.599 [2024-11-19 03:09:46.004645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.004972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.004985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.599 [2024-11-19 03:09:46.005517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.599 [2024-11-19 03:09:46.005531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:35.600 [2024-11-19 03:09:46.005947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.005978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.005992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.006008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.006022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.006044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.006059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.006074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.006088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.006104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.006118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.006134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.006148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.006164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.006178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.006197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.013753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.013821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.013837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 
[2024-11-19 03:09:46.013854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.013868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.013884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.013898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.013914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.013928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.013944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.013958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.013973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.013988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.014003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.014017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.014033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.014047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.014064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.014079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.014095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.014109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.014126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.014141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 
03:09:46.014157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.014180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.014197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.014211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.014226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f38f00 is same with the state(6) to be set 00:28:35.600 [2024-11-19 03:09:46.015580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.015604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.015628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.015644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.015660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.015675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.600 [2024-11-19 03:09:46.015697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.600 [2024-11-19 03:09:46.015713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.015983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.015997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.016014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.016028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.016044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.016058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.016074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.016088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.016104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.016118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.601 [2024-11-19 03:09:46.016134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.601 [2024-11-19 03:09:46.016148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:35.601-00:28:35.602 [2024-11-19 03:09:46.016164 to 03:09:46.017559] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [... 46 repeated pairs: READ sqid:1 cid:18 through cid:63 nsid:1 lba:10496 through 16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:28:35.602 [2024-11-19 03:09:46.017573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240d30 is same with the state(6) to be set
00:28:35.602-00:28:35.604 [2024-11-19 03:09:46.018814 to 03:09:46.020805] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [... 64 repeated pairs: READ sqid:1 cid:0 through cid:63 nsid:1 lba:8192 through 16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:28:35.604 [2024-11-19 03:09:46.020819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1242200 is same with the state(6) to be set
00:28:35.604 [2024-11-19 03:09:46.022489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:35.604 [2024-11-19 03:09:46.022526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:35.604 [2024-11-19 03:09:46.022546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:35.604 [2024-11-19 03:09:46.022565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:35.604 [2024-11-19 03:09:46.022583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:35.604 [2024-11-19 03:09:46.022657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:35.604 [2024-11-19 03:09:46.022677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:35.604 [2024-11-19 03:09:46.022704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:35.604 [2024-11-19 03:09:46.022724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:35.604 [2024-11-19 03:09:46.022743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:28:35.604 [2024-11-19 03:09:46.022756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:28:35.604 [2024-11-19 03:09:46.022769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:28:35.604 [2024-11-19 03:09:46.022781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:28:35.604 [2024-11-19 03:09:46.022853] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:28:35.604 [2024-11-19 03:09:46.022879] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
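The (00/08) status on the aborted completions above is NVMe status code type 0x00 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion": the READs were still outstanding on I/O qpair 1 when its submission queue was torn down for the controller reset, so bdevperf sees them fail rather than complete. When triaging a run like this it can help to tally how many commands were cut off per queue. A minimal sketch in bash, assuming the console output has been saved to a file named console.log (the filename is illustrative, not part of this run):

    # Count aborted completions in the saved console log and group them by submission queue id.
    # Adjust the filename/pattern if the captured log text differs from the excerpt above.
    grep -o 'ABORTED - SQ DELETION ([0-9/]*) qid:[0-9]*' console.log \
        | awk '{ counts[$NF]++ } END { for (q in counts) printf "%s: %d aborted completions\n", q, counts[q] }'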
00:28:35.604 [2024-11-19 03:09:46.022997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:35.604 task offset: 20096 on job bdev=Nvme5n1 fails
00:28:35.604
00:28:35.604 Latency(us)
00:28:35.604 [2024-11-19T02:09:46.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.604 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme1n1 ended in about 0.72 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme1n1 : 0.72 185.09 11.57 89.07 0.00 230110.09 27573.67 237677.23
00:28:35.604 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme2n1 ended in about 0.71 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme2n1 : 0.71 193.82 12.11 89.89 0.00 216623.28 11893.57 253211.69
00:28:35.604 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme3n1 ended in about 0.72 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme3n1 : 0.72 177.84 11.11 88.92 0.00 224354.86 19806.44 253211.69
00:28:35.604 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme4n1 ended in about 0.73 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme4n1 : 0.73 176.49 11.03 88.25 0.00 219973.91 27962.03 240784.12
00:28:35.604 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme5n1 ended in about 0.69 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme5n1 : 0.69 184.22 11.51 92.11 0.00 203674.74 8058.50 248551.35
00:28:35.604 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme6n1 ended in about 0.70 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme6n1 : 0.70 183.92 11.50 91.96 0.00 197849.06 8738.13 257872.02
00:28:35.604 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme7n1 ended in about 0.73 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme7n1 : 0.73 87.85 5.49 87.85 0.00 304174.27 19806.44 268746.15
00:28:35.604 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme8n1 ended in about 0.74 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme8n1 : 0.74 86.56 5.41 86.56 0.00 300714.67 34369.99 281173.71
00:28:35.604 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme9n1 ended in about 0.74 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme9n1 : 0.74 86.18 5.39 86.18 0.00 293580.42 21262.79 290494.39
00:28:35.604 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:35.604 Job: Nvme10n1 ended in about 0.75 seconds with error
00:28:35.604 Verification LBA range: start 0x0 length 0x400
00:28:35.604 Nvme10n1 : 0.75 85.81 5.36 85.81 0.00 286479.93 19806.44 262532.36
00:28:35.604 [2024-11-19T02:09:46.219Z] ===================================================================================================================
00:28:35.604 [2024-11-19T02:09:46.219Z] Total : 1447.78 90.49 886.59 0.00 240123.16 8058.50 290494.39
00:28:35.604
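The table above is bdevperf's per-target summary: runtime in seconds, IOPS, MiB/s, failed and timed-out I/O per second, then average/min/max latency in microseconds. For the IOPS, MiB/s and Fail/s columns the Total row is simply the sum of the per-device rows (1447.78, 90.49 and 886.59 here, up to rounding), while min and max are taken across devices. As a quick sanity check, the sums can be re-derived from a saved copy of the report; the sketch below assumes the rows, still carrying their elapsed-time prefixes, were saved to bdevperf_summary.txt (an illustrative filename):

    # Re-derive the Total IOPS / MiB/s / Fail/s figures from the per-device rows.
    # Field positions assume each row starts with its elapsed-time stamp, e.g.
    # "00:28:35.604 Nvme1n1 : 0.72 185.09 11.57 89.07 ...".
    awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" {
             iops += $5; mibs += $6; fails += $7
         }
         END { printf "Total: %.2f IOPS, %.2f MiB/s, %.2f Fail/s\n", iops, mibs, fails }' bdevperf_summary.txt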
[2024-11-19 03:09:46.052571] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:35.604 [2024-11-19 03:09:46.052662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:35.604 [2024-11-19 03:09:46.052957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.604 [2024-11-19 03:09:46.052994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33f50 with addr=10.0.0.2, port=4420 00:28:35.604 [2024-11-19 03:09:46.053017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33f50 is same with the state(6) to be set 00:28:35.604 [2024-11-19 03:09:46.053158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.604 [2024-11-19 03:09:46.053185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe4cd0 with addr=10.0.0.2, port=4420 00:28:35.604 [2024-11-19 03:09:46.053202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe4cd0 is same with the state(6) to be set 00:28:35.604 [2024-11-19 03:09:46.053292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.604 [2024-11-19 03:09:46.053319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe41b0 with addr=10.0.0.2, port=4420 00:28:35.604 [2024-11-19 03:09:46.053347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe41b0 is same with the state(6) to be set 00:28:35.604 [2024-11-19 03:09:46.053433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.604 [2024-11-19 03:09:46.053459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10274a0 with addr=10.0.0.2, port=4420 00:28:35.604 [2024-11-19 03:09:46.053476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10274a0 is same with the state(6) to be set 00:28:35.604 [2024-11-19 03:09:46.053561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.604 [2024-11-19 03:09:46.053588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026940 with addr=10.0.0.2, port=4420 00:28:35.604 [2024-11-19 03:09:46.053604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026940 is same with the state(6) to be set 00:28:35.605 [2024-11-19 03:09:46.055014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:35.605 [2024-11-19 03:09:46.055047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:35.605 [2024-11-19 03:09:46.055067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:35.605 [2024-11-19 03:09:46.055230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.605 [2024-11-19 03:09:46.055260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1046990 with addr=10.0.0.2, port=4420 00:28:35.605 [2024-11-19 03:09:46.055277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046990 is same with the state(6) to be set 00:28:35.605 [2024-11-19 03:09:46.055351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.605 [2024-11-19 03:09:46.055378] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1059ee0 with addr=10.0.0.2, port=4420 00:28:35.605 [2024-11-19 03:09:46.055395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(6) to be set 00:28:35.605 [2024-11-19 03:09:46.055422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33f50 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.055448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe4cd0 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.055468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe41b0 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.055486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10274a0 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.055506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026940 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.055556] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:35.605 [2024-11-19 03:09:46.055583] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:35.605 [2024-11-19 03:09:46.055603] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:35.605 [2024-11-19 03:09:46.055630] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:35.605 [2024-11-19 03:09:46.055650] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
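Each connect() failure above reports errno 111, which on Linux is ECONNREFUSED: the initiator keeps retrying 10.0.0.2:4420 while the listener on the target side is being torn down, which is expected in this shutdown test. If the same pattern appeared in a run where the target should still be up, two quick host-side checks are sketched below; this assumes iproute2 and nvme-cli are available on the test node, and the address/port are taken from the log itself:

    # Is anything still listening on the NVMe/TCP port?
    ss -ltn | grep -w 4420
    # Does the target answer a discovery connect on that address/port?
    nvme discover -t tcp -a 10.0.0.2 -s 4420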
00:28:35.605 [2024-11-19 03:09:46.056082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.605 [2024-11-19 03:09:46.056113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf30b0 with addr=10.0.0.2, port=4420 00:28:35.605 [2024-11-19 03:09:46.056143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf30b0 is same with the state(6) to be set 00:28:35.605 [2024-11-19 03:09:46.056225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.605 [2024-11-19 03:09:46.056253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5280 with addr=10.0.0.2, port=4420 00:28:35.605 [2024-11-19 03:09:46.056269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe5280 is same with the state(6) to be set 00:28:35.605 [2024-11-19 03:09:46.056348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.605 [2024-11-19 03:09:46.056375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbeb450 with addr=10.0.0.2, port=4420 00:28:35.605 [2024-11-19 03:09:46.056401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeb450 is same with the state(6) to be set 00:28:35.605 [2024-11-19 03:09:46.056420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1046990 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.056440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1059ee0 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.056459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.056473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.056490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.056508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.056525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.056540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.056553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.056566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.056580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.056592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.056607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.056619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:35.605 [2024-11-19 03:09:46.056634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.056646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.056659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.056673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.056701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.056718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.056732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.056744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.056846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf30b0 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.056872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe5280 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.056891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeb450 (9): Bad file descriptor 00:28:35.605 [2024-11-19 03:09:46.056907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.056921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.056935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.056949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.056963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.056976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.056989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.057002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.057038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.057055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.057069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:28:35.605 [2024-11-19 03:09:46.057082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.057096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.057109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.057122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.057135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:35.605 [2024-11-19 03:09:46.057148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:35.605 [2024-11-19 03:09:46.057161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:35.605 [2024-11-19 03:09:46.057175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:35.605 [2024-11-19 03:09:46.057187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:35.864 03:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 327097 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 327097 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 327097 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:37.241 03:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.241 rmmod nvme_tcp 00:28:37.241 rmmod nvme_fabrics 00:28:37.241 rmmod nvme_keyring 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.241 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 326920 ']' 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 326920 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 326920 ']' 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 326920 00:28:37.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (326920) - No such process 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 326920 is not found' 00:28:37.242 Process with pid 326920 is not found 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.242 03:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.149 00:28:39.149 real 0m7.193s 00:28:39.149 user 0m17.083s 00:28:39.149 sys 0m1.307s 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:39.149 ************************************ 00:28:39.149 END TEST nvmf_shutdown_tc3 00:28:39.149 ************************************ 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.149 ************************************ 00:28:39.149 START TEST nvmf_shutdown_tc4 00:28:39.149 ************************************ 00:28:39.149 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:39.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:39.150 03:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.150 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.151 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:28:39.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:28:39.410 00:28:39.410 --- 10.0.0.2 ping statistics --- 00:28:39.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.410 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:28:39.410 00:28:39.410 --- 10.0.0.1 ping statistics --- 00:28:39.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.410 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=327879 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 327879 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 327879 ']' 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.410 03:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.410 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.410 [2024-11-19 03:09:49.905406] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:39.410 [2024-11-19 03:09:49.905497] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.410 [2024-11-19 03:09:49.980591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.669 [2024-11-19 03:09:50.034231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.669 [2024-11-19 03:09:50.034298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.669 [2024-11-19 03:09:50.034322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.669 [2024-11-19 03:09:50.034333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.669 [2024-11-19 03:09:50.034342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.669 [2024-11-19 03:09:50.035921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.669 [2024-11-19 03:09:50.035959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.669 [2024-11-19 03:09:50.036038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:39.669 [2024-11-19 03:09:50.036040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.669 [2024-11-19 03:09:50.178275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.669 03:09:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.669 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:39.670 
03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.670 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.670 Malloc1 00:28:39.670 [2024-11-19 03:09:50.267129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.928 Malloc2 00:28:39.928 Malloc3 00:28:39.928 Malloc4 00:28:39.928 Malloc5 00:28:39.928 Malloc6 00:28:39.928 Malloc7 00:28:40.186 Malloc8 00:28:40.186 Malloc9 00:28:40.186 Malloc10 00:28:40.186 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.186 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:40.186 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.186 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.186 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=328055 00:28:40.186 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:40.186 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:40.186 [2024-11-19 03:09:50.777155] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
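For context on what the trace above is exercising: the target is populated with ten malloc-backed subsystems (nqn.2016-06.io.spdk:cnode1 through cnode10) listening on 10.0.0.2:4420 inside the cvl_0_0_ns_spdk namespace, and spdk_nvme_perf is then aimed at that listener while the target is torn down underneath it. A roughly equivalent manual setup can be sketched with SPDK's rpc.py helpers; this is an illustrative sketch only, not the test script's own code path, and the bdev size, block size, and serial numbers below are assumptions rather than values taken from this run.

    # Assumes an nvmf_tgt is already running (here it runs inside cvl_0_0_ns_spdk, as traced above).
    RPC=./scripts/rpc.py

    # Transport options mirror the rpc_cmd seen in the trace (-t tcp -o -u 8192).
    $RPC nvmf_create_transport -t TCP -o -u 8192

    for i in $(seq 1 10); do
        # 64 MiB malloc bdev with 512-byte blocks (illustrative sizes).
        $RPC bdev_malloc_create -b Malloc$i 64 512
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

    # Drive I/O at the listener from the initiator side, mirroring the perf invocation above.
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4

With the target killed mid-run (as the test does a few seconds later), the perf process is expected to log the qpair disconnects and failed writes that follow.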
00:28:45.456 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.456 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 327879 00:28:45.456 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 327879 ']' 00:28:45.456 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 327879 00:28:45.456 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:45.456 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.457 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 327879 00:28:45.457 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.457 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.457 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 327879' 00:28:45.457 killing process with pid 327879 00:28:45.457 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 327879 00:28:45.457 03:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 327879 00:28:45.457 [2024-11-19 03:09:55.768882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ca30 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.768970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ca30 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.768987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ca30 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.769000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ca30 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.769013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ca30 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.769027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ca30 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.769947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3cf20 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.769983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3cf20 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.769999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3cf20 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.770013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3cf20 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.770026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f3cf20 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.770935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.770971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.770988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.771001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.771014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.771026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.771038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.771051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d3f0 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772339] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 [2024-11-19 03:09:55.772393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3c560 is same with the state(6) to be set 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.457 starting I/O failed: -6 00:28:45.457 [2024-11-19 03:09:55.776102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.457 [2024-11-19 03:09:55.776325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f13850 is same with the 
state(6) to be set 00:28:45.457 Write completed with error (sct=0, sc=8) 00:28:45.458 [2024-11-19 03:09:55.776356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f13850 is same with the state(6) to be set 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 [2024-11-19 03:09:55.776370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f13850 is same with the state(6) to be set 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 [2024-11-19 03:09:55.777151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 
starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write 
completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 [2024-11-19 03:09:55.778484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.458 Write completed with error (sct=0, sc=8) 00:28:45.458 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error (sct=0, sc=8) 00:28:45.459 starting I/O failed: -6 00:28:45.459 Write completed with error 
(sct=0, sc=8) 00:28:45.459 starting I/O failed: -6
00:28:45.459 Write completed with error (sct=0, sc=8)
00:28:45.459 starting I/O failed: -6
[... the 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pair repeats for the remaining queued writes ...]
00:28:45.459 [2024-11-19 03:09:55.780311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.459 NVMe io qpair process completion error
00:28:45.459 Write completed with error (sct=0, sc=8)
00:28:45.459 starting I/O failed: -6
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.459 [2024-11-19 03:09:55.781622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f15a00 is same with the state(6) to be set
00:28:45.459 [2024-11-19 03:09:55.781683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.460 [2024-11-19 03:09:55.782391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14b90 is same with the state(6) to be set
00:28:45.460 [2024-11-19 03:09:55.782424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14b90 is same with the state(6) to be set
00:28:45.460 [2024-11-19 03:09:55.782440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14b90 is same with the state(6) to be set
00:28:45.460 [2024-11-19 03:09:55.782455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14b90 is same with the state(6) to be set
00:28:45.460 [2024-11-19 03:09:55.782469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14b90 is same with the state(6) to be set
00:28:45.460 [2024-11-19 03:09:55.782482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14b90 is same with the state(6) to be set
00:28:45.460 [2024-11-19 03:09:55.782735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.460 [2024-11-19 03:09:55.784062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.461 [2024-11-19 03:09:55.786603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.461 NVMe io qpair process completion error
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
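The '[nqn.2016-06.io.spdk:cnodeN, 1] CQ transport error -6 (No such device or address)' lines above come from the host-side completion path in nvme_qpair.c: once the TCP connection to the target is gone, polling the qpair's completion queue returns -ENXIO (-6) instead of a completion count. A minimal sketch of that host-side pattern, assuming an already-connected qpair handle from the application (illustrative only, not the code this test runs):

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair and surface a transport-level failure.  The qpair is
 * assumed to have been allocated and connected by the application. */
static int
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* Returns the number of completions processed, or a negative errno
	 * such as -ENXIO (-6) once the connection to the target is lost. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		fprintf(stderr, "qpair completion polling failed: %d%s\n", (int)rc,
			rc == -ENXIO ? " (No such device or address)" : "");
		return (int)rc;
	}
	return 0;
}

In this log the negative return shows up once per I/O qpair of each subsystem, which is consistent with every connection to the target being torn down while writes are still queued.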
00:28:45.461 Write completed with error (sct=0, sc=8)
[... repeated 'Write completed with error (sct=0, sc=8)' entries ...]
[2024-11-19 03:09:55.787931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
[2024-11-19 03:09:55.788931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
[2024-11-19 03:09:55.790085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.463 [2024-11-19 03:09:55.791619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.463 NVMe io qpair process completion error
00:28:45.463 Write completed with error (sct=0, sc=8)
[... repeated 'Write completed with error (sct=0, sc=8)' entries ...]
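Each 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pair above is two separate events: the completion callback for an in-flight write reports status code type 0 (generic command status) with status code 8, and the attempt to start a replacement write fails with -6 (-ENXIO) because the qpair is already disconnected. In the generic status set, code 0x08 appears to correspond to a command aborted due to SQ deletion, which matches the queues being torn down. A hedged sketch of where those two messages would come from in an SPDK host application; the namespace, buffer, and LBA arguments are placeholders, not taken from this test:

#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback: report the status code type (sct) and status code (sc)
 * carried in the NVMe completion entry. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Submission path: starting a new write on a disconnected qpair fails
 * immediately with a negative errno. */
static void
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		 void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					write_done, NULL, 0);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
	}
}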
00:28:45.463 [2024-11-19 03:09:55.792879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.464 [2024-11-19 03:09:55.793911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.464 [2024-11-19 03:09:55.795051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.465 [2024-11-19 03:09:55.796752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.465 NVMe io qpair process completion error
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
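After the last qpair of a subsystem fails, the log prints 'NVMe io qpair process completion error' and the same sequence repeats for the next subsystem (cnode8 below). A host application that wanted to recover rather than just record the failure would typically check whether the controller is marked failed and reset it before re-creating its I/O qpairs. A rough sketch under the assumption that 'ctrlr' is the handle whose connections were lost (illustrative policy only):

#include <stdio.h>
#include "spdk/nvme.h"

/* If completions report a transport error, check the controller state and
 * reset it so the I/O qpairs can be re-allocated afterwards, e.g. with
 * spdk_nvme_ctrlr_alloc_io_qpair(). */
static int
try_recover(struct spdk_nvme_ctrlr *ctrlr)
{
	if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
		return 0;	/* controller still healthy, nothing to do */
	}

	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		fprintf(stderr, "controller reset failed\n");
		return -1;
	}
	return 0;
}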
00:28:45.465 Write completed with error (sct=0, sc=8)
00:28:45.465 starting I/O failed: -6
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
[2024-11-19 03:09:55.797900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
[2024-11-19 03:09:55.799028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.466 [2024-11-19 03:09:55.800200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.467 [2024-11-19 03:09:55.802276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.467 NVMe io qpair process completion error
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.468 [2024-11-19 03:09:55.803572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries ...]
00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with
error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 [2024-11-19 03:09:55.804635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 
00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 [2024-11-19 03:09:55.805797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 
00:28:45.468 starting I/O failed: -6 00:28:45.468 Write completed with error (sct=0, sc=8) 00:28:45.468 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 
00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 [2024-11-19 03:09:55.809048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.469 NVMe io qpair process completion error 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 starting I/O failed: -6 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.469 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 
00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 [2024-11-19 03:09:55.810412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 [2024-11-19 03:09:55.811419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 
Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write 
completed with error (sct=0, sc=8) 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 [2024-11-19 03:09:55.812564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.470 Write completed with error (sct=0, sc=8) 00:28:45.470 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting 
I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 [2024-11-19 03:09:55.815549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.471 NVMe io qpair process completion error 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed 
with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 [2024-11-19 03:09:55.816791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.471 starting I/O failed: -6 00:28:45.471 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 
Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 [2024-11-19 03:09:55.817875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O 
failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 [2024-11-19 03:09:55.819029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.472 Write completed with error (sct=0, 
sc=8) 00:28:45.472 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 
00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 [2024-11-19 03:09:55.820854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.473 NVMe io qpair process completion error 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 [2024-11-19 03:09:55.822199] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.473 starting I/O failed: -6 00:28:45.473 starting I/O failed: -6 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 starting I/O failed: -6 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.473 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 [2024-11-19 03:09:55.823317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 
Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 [2024-11-19 03:09:55.824431] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.474 starting I/O failed: -6 00:28:45.474 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error 
(sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 [2024-11-19 03:09:55.826191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.475 NVMe io qpair process completion error 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 
00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 [2024-11-19 03:09:55.827478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.475 Write completed with error (sct=0, sc=8) 00:28:45.475 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 
00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 [2024-11-19 03:09:55.828565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O 
failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.476 Write completed with error (sct=0, sc=8) 00:28:45.476 starting I/O failed: -6 00:28:45.477 [2024-11-19 03:09:55.829726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 
Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write 
completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 Write completed with error (sct=0, sc=8) 00:28:45.477 starting I/O failed: -6 00:28:45.477 [2024-11-19 03:09:55.833823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.477 NVMe io qpair process completion error 00:28:45.477 Initializing NVMe Controllers 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:28:45.477 Controller IO queue size 128, less than required. 00:28:45.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
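The flood of "Write completed with error (sct=0, sc=8)" entries and the "CQ transport error -6 (No such device or address)" messages above are the expected signature of this test: shutdown_tc4 tears the target subsystems down while spdk_nvme_perf still has writes in flight, so spdk_nvme_qpair_process_completions() reports the dead connection and every outstanding command on the affected qpair is failed back to the tool. The "Controller IO queue size 128, less than required" notices only mean the requested queue depth exceeds what each controller advertises, so the excess requests wait in the driver's software queue. A minimal sketch of the kind of perf invocation involved, using the binary path printed later in this log; the flag values and the cnode1 subsystem below are illustrative assumptions, not the exact arguments used by shutdown.sh:

    # Illustrative only: flag values are assumptions, not taken from this run.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TRID='trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # -q: queue depth (kept at or below the advertised IO queue size of 128),
    # -o: I/O size in bytes, -w: I/O pattern, -t: run time in seconds.
    "$PERF" -r "$TRID" -q 64 -o 4096 -w write -t 10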
00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:28:45.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:28:45.477 Initialization complete. Launching workers. 00:28:45.477 ======================================================== 00:28:45.477 Latency(us) 00:28:45.478 Device Information : IOPS MiB/s Average min max 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1824.11 78.38 70191.07 1004.51 126378.38 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1795.41 77.15 71331.53 907.92 125742.36 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1797.79 77.25 71258.85 827.71 153056.10 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1791.10 76.96 71551.82 943.44 125676.10 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1779.24 76.45 72077.56 926.30 129773.20 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1793.04 77.04 71562.66 1102.16 133371.42 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1837.70 78.96 69849.28 796.95 135721.94 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1801.02 77.39 71294.98 1084.50 119079.22 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1810.73 77.80 70077.49 918.99 119751.27 00:28:45.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1831.87 78.71 69298.26 842.27 118647.51 00:28:45.478 ======================================================== 00:28:45.478 Total : 18062.00 776.10 70841.01 796.95 153056.10 00:28:45.478 00:28:45.478 [2024-11-19 03:09:55.840305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6c140 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84c40 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89b40 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7ae40 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd62330 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd71040 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ea40 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7fd40 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67240 is same with the state(6) to be set 00:28:45.478 [2024-11-19 03:09:55.840878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f40 is same with the state(6) to be set 00:28:45.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:45.738 03:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 328055 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 328055 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 328055 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.679 03:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.679 rmmod nvme_tcp 00:28:46.679 rmmod nvme_fabrics 00:28:46.679 rmmod nvme_keyring 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 327879 ']' 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 327879 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 327879 ']' 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 327879 00:28:46.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (327879) - No such process 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 327879 is not found' 00:28:46.679 Process with pid 327879 is not found 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.679 03:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.217 03:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.217 00:28:49.217 real 0m9.674s 00:28:49.217 user 0m22.686s 00:28:49.217 sys 0m5.896s 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 ************************************ 00:28:49.217 END TEST nvmf_shutdown_tc4 00:28:49.217 ************************************ 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:49.217 00:28:49.217 real 0m36.485s 00:28:49.217 user 1m37.491s 00:28:49.217 sys 0m12.016s 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 ************************************ 00:28:49.217 END TEST nvmf_shutdown 00:28:49.217 ************************************ 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 ************************************ 00:28:49.217 START TEST nvmf_nsid 00:28:49.217 ************************************ 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:49.217 * Looking for test storage... 
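With the shutdown suite finished (both END TEST banners above), run_test moves on to the nvmf_nsid suite by executing target/nsid.sh with --transport=tcp; run_test is the harness wrapper that prints the START/END banners and the real/user/sys timing shown above. A small sketch of invoking the same suite by hand, assuming the workspace layout of this job and that root privileges are available for the namespace and iptables setup the script performs:

    # Sketch: run the nsid suite directly, outside the run_test wrapper.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo "$SPDK_DIR/test/nvmf/target/nsid.sh" --transport=tcp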
00:28:49.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.217 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:49.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.218 --rc genhtml_branch_coverage=1 00:28:49.218 --rc genhtml_function_coverage=1 00:28:49.218 --rc genhtml_legend=1 00:28:49.218 --rc geninfo_all_blocks=1 00:28:49.218 --rc geninfo_unexecuted_blocks=1 00:28:49.218 00:28:49.218 ' 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:49.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.218 --rc genhtml_branch_coverage=1 00:28:49.218 --rc genhtml_function_coverage=1 00:28:49.218 --rc genhtml_legend=1 00:28:49.218 --rc geninfo_all_blocks=1 00:28:49.218 --rc geninfo_unexecuted_blocks=1 00:28:49.218 00:28:49.218 ' 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:49.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.218 --rc genhtml_branch_coverage=1 00:28:49.218 --rc genhtml_function_coverage=1 00:28:49.218 --rc genhtml_legend=1 00:28:49.218 --rc geninfo_all_blocks=1 00:28:49.218 --rc geninfo_unexecuted_blocks=1 00:28:49.218 00:28:49.218 ' 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:49.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.218 --rc genhtml_branch_coverage=1 00:28:49.218 --rc genhtml_function_coverage=1 00:28:49.218 --rc genhtml_legend=1 00:28:49.218 --rc geninfo_all_blocks=1 00:28:49.218 --rc geninfo_unexecuted_blocks=1 00:28:49.218 00:28:49.218 ' 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.218 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.219 03:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:51.123 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.123 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:51.124 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:51.124 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
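The two "Found 0000:0a:00.x (0x8086 - 0x159b)" lines show gather_supported_nvmf_pci_devs matching the Intel E810 vendor:device pair from its known-device tables; the script then resolves each PCI function to its kernel net device through /sys/bus/pci/devices/$pci/net/, which is where the cvl_0_0 and cvl_0_1 names below come from. A hand-rolled equivalent of that lookup, shown only to illustrate the sysfs walk (this is not the script's actual function):

    # List the net device behind every PCI function matching the E810 IDs found above.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
        [ "$(cat "$pci/device")" = "0x159b" ] || continue
        echo "Found ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
    done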
00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:51.124 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:51.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.124 03:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.124 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.125 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.125 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.125 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.125 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.125 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.125 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:28:51.384 00:28:51.384 --- 10.0.0.2 ping statistics --- 00:28:51.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.384 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:28:51.384 00:28:51.384 --- 10.0.0.1 ping statistics --- 00:28:51.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.384 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=330793 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 330793 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 330793 ']' 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.384 03:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:51.384 [2024-11-19 03:10:01.834390] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
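For readers following the trace: nvmf_tcp_init builds the point-to-point test topology by moving the target-side port (cvl_0_0) into a private network namespace, leaving the initiator-side port (cvl_0_1) in the root namespace, and then proving reachability in both directions with ping. A condensed recap of the commands shown above (interface names and addresses are specific to this run; this is a sketch of what the harness did, not a standalone setup script):

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The nvmf_tgt process for this test is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 1, as shown just above), which is why it can listen on 10.0.0.2 while the host-side nvme-cli commands run from the root namespace.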
00:28:51.384 [2024-11-19 03:10:01.834468] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.384 [2024-11-19 03:10:01.905323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.384 [2024-11-19 03:10:01.949212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.384 [2024-11-19 03:10:01.949279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.384 [2024-11-19 03:10:01.949302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.384 [2024-11-19 03:10:01.949313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.384 [2024-11-19 03:10:01.949322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.384 [2024-11-19 03:10:01.949922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=330812 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=554a6cda-743e-40bc-8011-7289b475d92a 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a283889b-dc59-4112-bf68-811eba1783bc 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=252b20c8-4663-47cf-923f-4943911a82fe 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:51.643 null0 00:28:51.643 null1 00:28:51.643 null2 00:28:51.643 [2024-11-19 03:10:02.128407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.643 [2024-11-19 03:10:02.138421] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:28:51.643 [2024-11-19 03:10:02.138479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330812 ] 00:28:51.643 [2024-11-19 03:10:02.152609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 330812 /var/tmp/tgt2.sock 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 330812 ']' 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:51.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
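Two details worth noting in the trace above: the nsid test drives its second SPDK application instance through a private RPC socket (spdk_tgt -m 2 -r /var/tmp/tgt2.sock, with the matching scripts/rpc.py -s /var/tmp/tgt2.sock call a few lines below), so the two targets can be configured independently, and the three uuidgen values become the identities the test later expects to read back from the host side. The expected NGUID is simply the UUID with its dashes stripped, which is what nvmf/common.sh's uuid2nguid does with tr. A minimal sketch of that derivation (variable names here are illustrative):

    ns1uuid=$(uuidgen)                        # e.g. 554a6cda-743e-40bc-8011-7289b475d92a in this run
    expected_nguid=$(tr -d - <<< "$ns1uuid")  # 554a6cda743e40bc80117289b475d92a
    # host side, once the controller is connected:
    #   nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid
    # the test compares uppercased copies of the two strings, as seen further down in this trace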
00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.643 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:51.643 [2024-11-19 03:10:02.204483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.643 [2024-11-19 03:10:02.249550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.902 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.902 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:51.902 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:52.468 [2024-11-19 03:10:02.885731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.468 [2024-11-19 03:10:02.901922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:52.468 nvme0n1 nvme0n2 00:28:52.468 nvme1n1 00:28:52.468 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:52.468 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:52.468 03:10:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:53.034 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:53.967 03:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 554a6cda-743e-40bc-8011-7289b475d92a 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:53.967 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=554a6cda743e40bc80117289b475d92a 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 554A6CDA743E40BC80117289B475D92A 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 554A6CDA743E40BC80117289B475D92A == \5\5\4\A\6\C\D\A\7\4\3\E\4\0\B\C\8\0\1\1\7\2\8\9\B\4\7\5\D\9\2\A ]] 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a283889b-dc59-4112-bf68-811eba1783bc 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a283889bdc594112bf68811eba1783bc 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A283889BDC594112BF68811EBA1783BC 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A283889BDC594112BF68811EBA1783BC == \A\2\8\3\8\8\9\B\D\C\5\9\4\1\1\2\B\F\6\8\8\1\1\E\B\A\1\7\8\3\B\C ]] 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:54.225 03:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:54.225 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 252b20c8-4663-47cf-923f-4943911a82fe 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=252b20c8466347cf923f4943911a82fe 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 252B20C8466347CF923F4943911A82FE 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 252B20C8466347CF923F4943911A82FE == \2\5\2\B\2\0\C\8\4\6\6\3\4\7\C\F\9\2\3\F\4\9\4\3\9\1\1\A\8\2\F\E ]] 00:28:54.226 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 330812 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 330812 ']' 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 330812 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330812 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330812' 00:28:54.484 killing process with pid 330812 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 330812 00:28:54.484 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 330812 00:28:54.743 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:54.743 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:54.743 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:54.743 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:54.743 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:28:54.743 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.743 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.743 rmmod nvme_tcp 00:28:54.743 rmmod nvme_fabrics 00:28:55.001 rmmod nvme_keyring 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 330793 ']' 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 330793 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 330793 ']' 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 330793 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330793 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330793' 00:28:55.001 killing process with pid 330793 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 330793 00:28:55.001 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 330793 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.259 03:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.167 03:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:57.167 00:28:57.167 real 0m8.259s 00:28:57.167 user 0m8.076s 00:28:57.167 
sys 0m2.533s 00:28:57.167 03:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.167 03:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:57.167 ************************************ 00:28:57.167 END TEST nvmf_nsid 00:28:57.167 ************************************ 00:28:57.167 03:10:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:57.167 00:28:57.167 real 18m11.369s 00:28:57.167 user 50m37.286s 00:28:57.167 sys 3m53.170s 00:28:57.167 03:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.167 03:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:57.167 ************************************ 00:28:57.167 END TEST nvmf_target_extra 00:28:57.167 ************************************ 00:28:57.167 03:10:07 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:57.167 03:10:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:57.167 03:10:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.167 03:10:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:57.167 ************************************ 00:28:57.167 START TEST nvmf_host 00:28:57.167 ************************************ 00:28:57.167 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:57.427 * Looking for test storage... 00:28:57.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.427 --rc genhtml_branch_coverage=1 00:28:57.427 --rc genhtml_function_coverage=1 00:28:57.427 --rc genhtml_legend=1 00:28:57.427 --rc geninfo_all_blocks=1 00:28:57.427 --rc geninfo_unexecuted_blocks=1 00:28:57.427 00:28:57.427 ' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.427 --rc genhtml_branch_coverage=1 00:28:57.427 --rc genhtml_function_coverage=1 00:28:57.427 --rc genhtml_legend=1 00:28:57.427 --rc geninfo_all_blocks=1 00:28:57.427 --rc geninfo_unexecuted_blocks=1 00:28:57.427 00:28:57.427 ' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.427 --rc genhtml_branch_coverage=1 00:28:57.427 --rc genhtml_function_coverage=1 00:28:57.427 --rc genhtml_legend=1 00:28:57.427 --rc geninfo_all_blocks=1 00:28:57.427 --rc geninfo_unexecuted_blocks=1 00:28:57.427 00:28:57.427 ' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.427 --rc genhtml_branch_coverage=1 00:28:57.427 --rc genhtml_function_coverage=1 00:28:57.427 --rc genhtml_legend=1 00:28:57.427 --rc geninfo_all_blocks=1 00:28:57.427 --rc geninfo_unexecuted_blocks=1 00:28:57.427 00:28:57.427 ' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.427 03:10:07 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:57.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.428 ************************************ 00:28:57.428 START TEST nvmf_multicontroller 00:28:57.428 ************************************ 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:57.428 * Looking for test storage... 
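The "[: : integer expression expected" message above is not a test failure; it comes from nvmf/common.sh line 33, where an arithmetic test runs against a variable that expanded to the empty string ('[' '' -eq 1 ']'), so the guard returns non-zero and execution simply continues. The usual hardening for this pattern is to give the expansion a numeric default; a hedged sketch (SOME_FLAG is a stand-in, since the actual variable name tested at line 33 is not visible in this trace):

    # what effectively ran here, per the xtrace line above:
    [ '' -eq 1 ]                  # prints "[: : integer expression expected", returns status 2
    # defensive variant that treats unset/empty as 0:
    [ "${SOME_FLAG:-0}" -eq 1 ]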
00:28:57.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:28:57.428 03:10:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:57.428 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:57.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.688 --rc genhtml_branch_coverage=1 00:28:57.688 --rc genhtml_function_coverage=1 00:28:57.688 --rc genhtml_legend=1 00:28:57.688 --rc geninfo_all_blocks=1 00:28:57.688 --rc geninfo_unexecuted_blocks=1 00:28:57.688 00:28:57.688 ' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:57.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.688 --rc genhtml_branch_coverage=1 00:28:57.688 --rc genhtml_function_coverage=1 00:28:57.688 --rc genhtml_legend=1 00:28:57.688 --rc geninfo_all_blocks=1 00:28:57.688 --rc geninfo_unexecuted_blocks=1 00:28:57.688 00:28:57.688 ' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:57.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.688 --rc genhtml_branch_coverage=1 00:28:57.688 --rc genhtml_function_coverage=1 00:28:57.688 --rc genhtml_legend=1 00:28:57.688 --rc geninfo_all_blocks=1 00:28:57.688 --rc geninfo_unexecuted_blocks=1 00:28:57.688 00:28:57.688 ' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:57.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.688 --rc genhtml_branch_coverage=1 00:28:57.688 --rc genhtml_function_coverage=1 00:28:57.688 --rc genhtml_legend=1 00:28:57.688 --rc geninfo_all_blocks=1 00:28:57.688 --rc geninfo_unexecuted_blocks=1 00:28:57.688 00:28:57.688 ' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:57.688 03:10:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:57.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:57.688 03:10:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.688 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.689 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:57.689 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:57.689 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:57.689 03:10:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.593 
03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.593 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:59.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:59.594 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.594 03:10:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:59.594 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:59.594 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
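Device discovery is complete at this point: both E810 ports (0000:0a:00.0 and 0000:0a:00.1, device ID 0x159b, ice driver) were mapped to the net devices cvl_0_0 and cvl_0_1 through the /sys/bus/pci/devices/$pci/net/* glob, so is_hw=yes and the test proceeds against real NICs. A minimal read-only sketch of that same PCI-to-netdev mapping (not taken from the test code):

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] || continue
          echo "$pci -> ${netdir##*/} ($(cat "$netdir/operstate"))"
      done
  done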
00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.594 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:28:59.853 00:28:59.853 --- 10.0.0.2 ping statistics --- 00:28:59.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.853 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:59.853 00:28:59.853 --- 10.0.0.1 ping statistics --- 00:28:59.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.853 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=333250 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 333250 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333250 ']' 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.853 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.853 [2024-11-19 03:10:10.313119] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
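nvmf_tcp_init has now built the loopback topology the rest of the run depends on: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables ACCEPT rule is added for TCP port 4420, and each direction is verified with a single ping. nvmfappstart then launches nvmf_tgt inside that namespace with core mask 0xE (PID 333250). A few read-only commands, not part of the script, that would confirm the layout (names taken from the trace above):

  ip netns list                                           # expect cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0   # expect 10.0.0.2/24
  ip -4 addr show cvl_0_1                                 # expect 10.0.0.1/24
  iptables -L INPUT -n --line-numbers | grep 4420         # expect the SPDK_NVMF ACCEPT rule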
00:28:59.853 [2024-11-19 03:10:10.313190] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.853 [2024-11-19 03:10:10.387036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.853 [2024-11-19 03:10:10.435289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.853 [2024-11-19 03:10:10.435341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.853 [2024-11-19 03:10:10.435360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.853 [2024-11-19 03:10:10.435370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.853 [2024-11-19 03:10:10.435380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.853 [2024-11-19 03:10:10.436849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.853 [2024-11-19 03:10:10.436912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.853 [2024-11-19 03:10:10.436915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 [2024-11-19 03:10:10.579138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 Malloc0 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 [2024-11-19 03:10:10.641018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 [2024-11-19 03:10:10.648889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 Malloc1 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=333398 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 333398 /var/tmp/bdevperf.sock 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333398 ']' 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.113 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.114 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
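The target is now fully configured over /var/tmp/spdk.sock: a TCP transport with an 8192-byte I/O unit size, two 64 MB malloc bdevs (Malloc0, Malloc1) exposed as namespaces of nqn.2016-06.io.spdk:cnode1 and cnode2, and listeners for both subsystems on 10.0.0.2 ports 4420 and 4421. bdevperf is then started suspended (-z) on its own socket /var/tmp/bdevperf.sock with queue depth 128, 4096-byte writes and a 1-second runtime. rpc_cmd in these traces is a thin wrapper around SPDK's scripts/rpc.py, so under that assumption the cnode1 half of the configuration could be reproduced by hand roughly as:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 is built the same way from Malloc1 (serial SPDK00000000000002)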
00:29:00.114 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.114 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.372 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.372 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:00.372 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:00.372 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.372 03:10:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.631 NVMe0n1 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.631 1 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.631 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.890 request: 00:29:00.890 { 00:29:00.890 "name": "NVMe0", 00:29:00.890 "trtype": "tcp", 00:29:00.890 "traddr": "10.0.0.2", 00:29:00.890 "adrfam": "ipv4", 00:29:00.890 "trsvcid": "4420", 00:29:00.890 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:00.890 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:00.890 "hostaddr": "10.0.0.1", 00:29:00.890 "prchk_reftag": false, 00:29:00.890 "prchk_guard": false, 00:29:00.890 "hdgst": false, 00:29:00.890 "ddgst": false, 00:29:00.890 "allow_unrecognized_csi": false, 00:29:00.890 "method": "bdev_nvme_attach_controller", 00:29:00.890 "req_id": 1 00:29:00.890 } 00:29:00.890 Got JSON-RPC error response 00:29:00.890 response: 00:29:00.890 { 00:29:00.890 "code": -114, 00:29:00.890 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:00.890 } 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.890 request: 00:29:00.890 { 00:29:00.890 "name": "NVMe0", 00:29:00.890 "trtype": "tcp", 00:29:00.890 "traddr": "10.0.0.2", 00:29:00.890 "adrfam": "ipv4", 00:29:00.890 "trsvcid": "4420", 00:29:00.890 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:00.890 "hostaddr": "10.0.0.1", 00:29:00.890 "prchk_reftag": false, 00:29:00.890 "prchk_guard": false, 00:29:00.890 "hdgst": false, 00:29:00.890 "ddgst": false, 00:29:00.890 "allow_unrecognized_csi": false, 00:29:00.890 "method": "bdev_nvme_attach_controller", 00:29:00.890 "req_id": 1 00:29:00.890 } 00:29:00.890 Got JSON-RPC error response 00:29:00.890 response: 00:29:00.890 { 00:29:00.890 "code": -114, 00:29:00.890 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:00.890 } 00:29:00.890 03:10:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.890 request: 00:29:00.890 { 00:29:00.890 "name": "NVMe0", 00:29:00.890 "trtype": "tcp", 00:29:00.890 "traddr": "10.0.0.2", 00:29:00.890 "adrfam": "ipv4", 00:29:00.890 "trsvcid": "4420", 00:29:00.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.890 "hostaddr": "10.0.0.1", 00:29:00.890 "prchk_reftag": false, 00:29:00.890 "prchk_guard": false, 00:29:00.890 "hdgst": false, 00:29:00.890 "ddgst": false, 00:29:00.890 "multipath": "disable", 00:29:00.890 "allow_unrecognized_csi": false, 00:29:00.890 "method": "bdev_nvme_attach_controller", 00:29:00.890 "req_id": 1 00:29:00.890 } 00:29:00.890 Got JSON-RPC error response 00:29:00.890 response: 00:29:00.890 { 00:29:00.890 "code": -114, 00:29:00.890 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:00.890 } 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.890 03:10:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.890 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.890 request: 00:29:00.890 { 00:29:00.890 "name": "NVMe0", 00:29:00.890 "trtype": "tcp", 00:29:00.890 "traddr": "10.0.0.2", 00:29:00.890 "adrfam": "ipv4", 00:29:00.890 "trsvcid": "4420", 00:29:00.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.891 "hostaddr": "10.0.0.1", 00:29:00.891 "prchk_reftag": false, 00:29:00.891 "prchk_guard": false, 00:29:00.891 "hdgst": false, 00:29:00.891 "ddgst": false, 00:29:00.891 "multipath": "failover", 00:29:00.891 "allow_unrecognized_csi": false, 00:29:00.891 "method": "bdev_nvme_attach_controller", 00:29:00.891 "req_id": 1 00:29:00.891 } 00:29:00.891 Got JSON-RPC error response 00:29:00.891 response: 00:29:00.891 { 00:29:00.891 "code": -114, 00:29:00.891 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:00.891 } 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.891 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.149 NVMe0n1 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
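The four NOT-wrapped bdev_nvme_attach_controller calls above are expected-failure cases: reusing the controller name NVMe0 with a different hostnqn, pointing it at a different subsystem (cnode2), passing -x disable, or passing -x failover against the already-attached path all return JSON-RPC error -114, and the test treats that non-zero exit as a pass. The plain re-attach to the second listener port 4421 that follows succeeds and gives NVMe0 a second path to cnode1. A hedged sketch of the expected-failure pattern (the real NOT helper lives in autotest_common.sh; this is only its shape):

  expect_rpc_failure() {
      if "$@"; then
          echo "ERROR: command unexpectedly succeeded: $*" >&2
          return 1
      fi
      return 0    # a non-zero exit, e.g. the -114 "already exists" error, is the passing outcome
  }
  expect_rpc_failure scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1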
00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.149 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:01.149 03:10:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:02.523 { 00:29:02.523 "results": [ 00:29:02.523 { 00:29:02.523 "job": "NVMe0n1", 00:29:02.523 "core_mask": "0x1", 00:29:02.523 "workload": "write", 00:29:02.523 "status": "finished", 00:29:02.523 "queue_depth": 128, 00:29:02.523 "io_size": 4096, 00:29:02.523 "runtime": 1.005685, 00:29:02.523 "iops": 17302.634522738233, 00:29:02.523 "mibps": 67.58841610444622, 00:29:02.523 "io_failed": 0, 00:29:02.523 "io_timeout": 0, 00:29:02.523 "avg_latency_us": 7386.309920545222, 00:29:02.523 "min_latency_us": 6553.6, 00:29:02.523 "max_latency_us": 15631.54962962963 00:29:02.523 } 00:29:02.523 ], 00:29:02.523 "core_count": 1 00:29:02.523 } 00:29:02.523 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:02.523 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.523 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.523 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.523 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:02.523 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 333398 00:29:02.523 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 
-- # '[' -z 333398 ']' 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333398 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333398 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333398' 00:29:02.524 killing process with pid 333398 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333398 00:29:02.524 03:10:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333398 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:02.524 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:02.524 [2024-11-19 03:10:10.757826] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:29:02.524 [2024-11-19 03:10:10.757925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333398 ] 00:29:02.524 [2024-11-19 03:10:10.827282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.524 [2024-11-19 03:10:10.874353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.524 [2024-11-19 03:10:11.602016] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name b4c38086-1b5a-4b86-b716-cf4c36944bc4 already exists 00:29:02.524 [2024-11-19 03:10:11.602066] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:b4c38086-1b5a-4b86-b716-cf4c36944bc4 alias for bdev NVMe1n1 00:29:02.524 [2024-11-19 03:10:11.602080] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:02.524 Running I/O for 1 seconds... 00:29:02.524 17273.00 IOPS, 67.47 MiB/s 00:29:02.524 Latency(us) 00:29:02.524 [2024-11-19T02:10:13.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.524 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:02.524 NVMe0n1 : 1.01 17302.63 67.59 0.00 0.00 7386.31 6553.60 15631.55 00:29:02.524 [2024-11-19T02:10:13.139Z] =================================================================================================================== 00:29:02.524 [2024-11-19T02:10:13.139Z] Total : 17302.63 67.59 0.00 0.00 7386.31 6553.60 15631.55 00:29:02.524 Received shutdown signal, test time was about 1.000000 seconds 00:29:02.524 00:29:02.524 Latency(us) 00:29:02.524 [2024-11-19T02:10:13.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.524 [2024-11-19T02:10:13.139Z] =================================================================================================================== 00:29:02.524 [2024-11-19T02:10:13.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:02.524 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.524 rmmod nvme_tcp 00:29:02.524 rmmod nvme_fabrics 00:29:02.524 rmmod nvme_keyring 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:02.524 
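The bdevperf figures captured in try.txt are internally consistent: 17302.63 write IOPS at 4096 bytes per I/O works out to 17302.63 * 4096 / 2^20 ≈ 67.59 MiB/s, which matches the reported throughput, and with a queue depth of 128 the implied time per outstanding I/O is 128 / 17302.63 ≈ 7.4 ms, close to the reported 7386 us average latency. A one-liner (not from the test) that reproduces the arithmetic:

  awk 'BEGIN { printf "%.2f MiB/s  %.0f us\n", 17302.63*4096/(1024*1024), 128/17302.63*1e6 }'
  # prints 67.59 MiB/s  7398 us, against the reported 67.59 MiB/s and 7386.31 us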
03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 333250 ']' 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 333250 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 333250 ']' 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333250 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.524 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333250 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333250' 00:29:02.783 killing process with pid 333250 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333250 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333250 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.783 03:10:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.322 00:29:05.322 real 0m7.513s 00:29:05.322 user 0m11.954s 00:29:05.322 sys 0m2.341s 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.322 ************************************ 00:29:05.322 END TEST nvmf_multicontroller 00:29:05.322 ************************************ 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.322 ************************************ 00:29:05.322 START TEST nvmf_aer 00:29:05.322 ************************************ 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:05.322 * Looking for test storage... 00:29:05.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.322 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:05.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.323 --rc genhtml_branch_coverage=1 00:29:05.323 --rc genhtml_function_coverage=1 00:29:05.323 --rc genhtml_legend=1 00:29:05.323 --rc geninfo_all_blocks=1 00:29:05.323 --rc geninfo_unexecuted_blocks=1 00:29:05.323 00:29:05.323 ' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:05.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.323 --rc genhtml_branch_coverage=1 00:29:05.323 --rc genhtml_function_coverage=1 00:29:05.323 --rc genhtml_legend=1 00:29:05.323 --rc geninfo_all_blocks=1 00:29:05.323 --rc geninfo_unexecuted_blocks=1 00:29:05.323 00:29:05.323 ' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:05.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.323 --rc genhtml_branch_coverage=1 00:29:05.323 --rc genhtml_function_coverage=1 00:29:05.323 --rc genhtml_legend=1 00:29:05.323 --rc geninfo_all_blocks=1 00:29:05.323 --rc geninfo_unexecuted_blocks=1 00:29:05.323 00:29:05.323 ' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:05.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.323 --rc genhtml_branch_coverage=1 00:29:05.323 --rc genhtml_function_coverage=1 00:29:05.323 --rc genhtml_legend=1 00:29:05.323 --rc geninfo_all_blocks=1 00:29:05.323 --rc geninfo_unexecuted_blocks=1 00:29:05.323 00:29:05.323 ' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:05.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.323 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.324 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.324 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.324 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:05.324 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.324 03:10:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:07.226 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:07.226 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.226 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:07.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.227 03:10:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:07.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.227 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.486 
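Note on the device discovery traced above: common.sh matches the two Intel E810 ports (device ID 0x159b, bound to the ice driver) at 0000:0a:00.0 and 0000:0a:00.1, then resolves each PCI function to its kernel net device through sysfs, which is how it arrives at cvl_0_0 and cvl_0_1. A minimal sketch of that lookup, assuming the PCI addresses seen in this run:

    # list the net device(s) the kernel created for each matched PCI function
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done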
03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:29:07.486 00:29:07.486 --- 10.0.0.2 ping statistics --- 00:29:07.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.486 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:07.486 00:29:07.486 --- 10.0.0.1 ping statistics --- 00:29:07.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.486 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=335615 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 335615 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 335615 ']' 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.486 03:10:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.486 [2024-11-19 03:10:17.971048] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
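For reference, the nvmf_tcp_init sequence traced above isolates the target side in a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, an ACCEPT rule is opened for TCP port 4420, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace. A condensed sketch using the interface names and addresses from this run (binary path shown relative to the SPDK tree):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                           # tagged so teardown can strip the rule again
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &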
00:29:07.486 [2024-11-19 03:10:17.971119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.486 [2024-11-19 03:10:18.045260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.486 [2024-11-19 03:10:18.091553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.486 [2024-11-19 03:10:18.091605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.486 [2024-11-19 03:10:18.091633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.486 [2024-11-19 03:10:18.091644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.486 [2024-11-19 03:10:18.091653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.486 [2024-11-19 03:10:18.093194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.486 [2024-11-19 03:10:18.093222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.486 [2024-11-19 03:10:18.093279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.486 [2024-11-19 03:10:18.093282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 [2024-11-19 03:10:18.239811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 Malloc0 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 [2024-11-19 03:10:18.308878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.745 [ 00:29:07.745 { 00:29:07.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:07.745 "subtype": "Discovery", 00:29:07.745 "listen_addresses": [], 00:29:07.745 "allow_any_host": true, 00:29:07.745 "hosts": [] 00:29:07.745 }, 00:29:07.745 { 00:29:07.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:07.745 "subtype": "NVMe", 00:29:07.745 "listen_addresses": [ 00:29:07.745 { 00:29:07.745 "trtype": "TCP", 00:29:07.745 "adrfam": "IPv4", 00:29:07.745 "traddr": "10.0.0.2", 00:29:07.745 "trsvcid": "4420" 00:29:07.745 } 00:29:07.745 ], 00:29:07.745 "allow_any_host": true, 00:29:07.745 "hosts": [], 00:29:07.745 "serial_number": "SPDK00000000000001", 00:29:07.745 "model_number": "SPDK bdev Controller", 00:29:07.745 "max_namespaces": 2, 00:29:07.745 "min_cntlid": 1, 00:29:07.745 "max_cntlid": 65519, 00:29:07.745 "namespaces": [ 00:29:07.745 { 00:29:07.745 "nsid": 1, 00:29:07.745 "bdev_name": "Malloc0", 00:29:07.745 "name": "Malloc0", 00:29:07.745 "nguid": "C249BEEB27E0491C9B33E624D1AF34E5", 00:29:07.745 "uuid": "c249beeb-27e0-491c-9b33-e624d1af34e5" 00:29:07.745 } 00:29:07.745 ] 00:29:07.745 } 00:29:07.745 ] 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=335639 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:07.745 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:08.003 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:08.261 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:08.261 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:08.261 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:08.261 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:08.262 Malloc1 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:08.262 [ 00:29:08.262 { 00:29:08.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:08.262 "subtype": "Discovery", 00:29:08.262 "listen_addresses": [], 00:29:08.262 "allow_any_host": true, 00:29:08.262 "hosts": [] 00:29:08.262 }, 00:29:08.262 { 00:29:08.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.262 "subtype": "NVMe", 00:29:08.262 "listen_addresses": [ 00:29:08.262 { 00:29:08.262 "trtype": "TCP", 00:29:08.262 "adrfam": "IPv4", 00:29:08.262 "traddr": "10.0.0.2", 00:29:08.262 "trsvcid": "4420" 00:29:08.262 } 00:29:08.262 ], 00:29:08.262 "allow_any_host": true, 00:29:08.262 "hosts": [], 00:29:08.262 "serial_number": "SPDK00000000000001", 00:29:08.262 "model_number": "SPDK bdev Controller", 00:29:08.262 "max_namespaces": 2, 00:29:08.262 "min_cntlid": 1, 00:29:08.262 "max_cntlid": 65519, 00:29:08.262 "namespaces": [ 00:29:08.262 
{ 00:29:08.262 "nsid": 1, 00:29:08.262 "bdev_name": "Malloc0", 00:29:08.262 "name": "Malloc0", 00:29:08.262 "nguid": "C249BEEB27E0491C9B33E624D1AF34E5", 00:29:08.262 "uuid": "c249beeb-27e0-491c-9b33-e624d1af34e5" 00:29:08.262 }, 00:29:08.262 { 00:29:08.262 "nsid": 2, 00:29:08.262 "bdev_name": "Malloc1", 00:29:08.262 "name": "Malloc1", 00:29:08.262 "nguid": "5465FADE864644DF8671E7159CE8CC44", 00:29:08.262 "uuid": "5465fade-8646-44df-8671-e7159ce8cc44" 00:29:08.262 } 00:29:08.262 ] 00:29:08.262 } 00:29:08.262 ] 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 335639 00:29:08.262 Asynchronous Event Request test 00:29:08.262 Attaching to 10.0.0.2 00:29:08.262 Attached to 10.0.0.2 00:29:08.262 Registering asynchronous event callbacks... 00:29:08.262 Starting namespace attribute notice tests for all controllers... 00:29:08.262 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:08.262 aer_cb - Changed Namespace 00:29:08.262 Cleaning up... 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.262 rmmod nvme_tcp 00:29:08.262 rmmod nvme_fabrics 00:29:08.262 rmmod nvme_keyring 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 335615 ']' 00:29:08.262 
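Stripped of the harness plumbing, the aer.sh run above is a short RPC sequence (rpc_cmd in these logs is essentially scripts/rpc.py talking to the target's UNIX socket): create the TCP transport, publish a 64 MB Malloc0 bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1, add the 10.0.0.2:4420 listener, start the aer test binary against that subsystem, then hot-add Malloc1 as namespace 2, which is what produces the "aer_cb - Changed Namespace" notice before everything is deleted again. A hedged sketch of the equivalent direct calls:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0                 # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ... the aer test binary attaches to cnode1 and registers its AER callback here ...
    $rpc bdev_malloc_create 64 4096 --name Malloc1                # second malloc bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # hot-add as nsid 2 -> namespace-change AER
    # teardown, as at the end of the trace
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_malloc_delete Malloc1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1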
03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 335615 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 335615 ']' 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 335615 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.262 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335615 00:29:08.521 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:08.521 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:08.521 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335615' 00:29:08.521 killing process with pid 335615 00:29:08.521 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 335615 00:29:08.521 03:10:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 335615 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.521 03:10:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.060 03:10:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.060 00:29:11.060 real 0m5.663s 00:29:11.060 user 0m4.809s 00:29:11.060 sys 0m2.020s 00:29:11.060 03:10:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.060 03:10:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.060 ************************************ 00:29:11.060 END TEST nvmf_aer 00:29:11.060 ************************************ 00:29:11.060 03:10:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:11.060 03:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:11.060 03:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.060 03:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.060 ************************************ 00:29:11.060 START TEST nvmf_async_init 00:29:11.061 
************************************ 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:11.061 * Looking for test storage... 00:29:11.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.061 --rc genhtml_branch_coverage=1 00:29:11.061 --rc genhtml_function_coverage=1 00:29:11.061 --rc genhtml_legend=1 00:29:11.061 --rc geninfo_all_blocks=1 00:29:11.061 --rc geninfo_unexecuted_blocks=1 00:29:11.061 00:29:11.061 ' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.061 --rc genhtml_branch_coverage=1 00:29:11.061 --rc genhtml_function_coverage=1 00:29:11.061 --rc genhtml_legend=1 00:29:11.061 --rc geninfo_all_blocks=1 00:29:11.061 --rc geninfo_unexecuted_blocks=1 00:29:11.061 00:29:11.061 ' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.061 --rc genhtml_branch_coverage=1 00:29:11.061 --rc genhtml_function_coverage=1 00:29:11.061 --rc genhtml_legend=1 00:29:11.061 --rc geninfo_all_blocks=1 00:29:11.061 --rc geninfo_unexecuted_blocks=1 00:29:11.061 00:29:11.061 ' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.061 --rc genhtml_branch_coverage=1 00:29:11.061 --rc genhtml_function_coverage=1 00:29:11.061 --rc genhtml_legend=1 00:29:11.061 --rc geninfo_all_blocks=1 00:29:11.061 --rc geninfo_unexecuted_blocks=1 00:29:11.061 00:29:11.061 ' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.061 03:10:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:11.061 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:11.062 03:10:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=43dd73e0559d4000aa6c42864367ee7f 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.062 03:10:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:12.969 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.969 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:12.970 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:12.970 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:12.970 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.970 03:10:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.970 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:29:13.229 00:29:13.229 --- 10.0.0.2 ping statistics --- 00:29:13.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.229 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:29:13.229 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:13.230 00:29:13.230 --- 10.0.0.1 ping statistics --- 00:29:13.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.230 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=337711 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 337711 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 337711 ']' 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.230 03:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.230 [2024-11-19 03:10:23.835538] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
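The nvmf_tcp_init sequence above wires the two detected ports into a back-to-back test topology: the loop before it matched the Intel 0x159b (E810) functions in the PCI bus cache and picked up their netdevs from /sys/bus/pci/devices/<bdf>/net (cvl_0_0 and cvl_0_1), then cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 kept 10.0.0.1/24 in the root namespace, an SPDK_NVMF-tagged iptables rule opened TCP port 4420 on the initiator side, and a ping in each direction confirmed the link before nvmf_tgt was launched inside the namespace. A minimal stand-alone sketch of the same wiring, assuming two ports cabled back to back (the interface names below are simply the ones this run produced):

    TGT_IF=cvl_0_0                    # port handed to the target namespace
    INI_IF=cvl_0_1                    # port left in the root namespace (initiator side)
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF: test rule'
    ping -c 1 10.0.0.2                       # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace

With that in place the target is started as ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1, so every listener it opens on 10.0.0.2 is reachable only across this interface pair.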
00:29:13.230 [2024-11-19 03:10:23.835614] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.489 [2024-11-19 03:10:23.907386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.489 [2024-11-19 03:10:23.951430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.489 [2024-11-19 03:10:23.951501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.489 [2024-11-19 03:10:23.951524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.489 [2024-11-19 03:10:23.951535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.489 [2024-11-19 03:10:23.951545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.489 [2024-11-19 03:10:23.952174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.489 [2024-11-19 03:10:24.090376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.489 null0 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.489 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 43dd73e0559d4000aa6c42864367ee7f 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.748 [2024-11-19 03:10:24.130646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.748 nvme0n1 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.748 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.007 [ 00:29:14.007 { 00:29:14.007 "name": "nvme0n1", 00:29:14.007 "aliases": [ 00:29:14.007 "43dd73e0-559d-4000-aa6c-42864367ee7f" 00:29:14.007 ], 00:29:14.007 "product_name": "NVMe disk", 00:29:14.007 "block_size": 512, 00:29:14.007 "num_blocks": 2097152, 00:29:14.007 "uuid": "43dd73e0-559d-4000-aa6c-42864367ee7f", 00:29:14.007 "numa_id": 0, 00:29:14.007 "assigned_rate_limits": { 00:29:14.007 "rw_ios_per_sec": 0, 00:29:14.007 "rw_mbytes_per_sec": 0, 00:29:14.007 "r_mbytes_per_sec": 0, 00:29:14.007 "w_mbytes_per_sec": 0 00:29:14.007 }, 00:29:14.007 "claimed": false, 00:29:14.007 "zoned": false, 00:29:14.007 "supported_io_types": { 00:29:14.007 "read": true, 00:29:14.007 "write": true, 00:29:14.007 "unmap": false, 00:29:14.007 "flush": true, 00:29:14.007 "reset": true, 00:29:14.007 "nvme_admin": true, 00:29:14.007 "nvme_io": true, 00:29:14.007 "nvme_io_md": false, 00:29:14.007 "write_zeroes": true, 00:29:14.007 "zcopy": false, 00:29:14.007 "get_zone_info": false, 00:29:14.007 "zone_management": false, 00:29:14.007 "zone_append": false, 00:29:14.007 "compare": true, 00:29:14.007 "compare_and_write": true, 00:29:14.007 "abort": true, 00:29:14.007 "seek_hole": false, 00:29:14.007 "seek_data": false, 00:29:14.007 "copy": true, 00:29:14.007 "nvme_iov_md": false 00:29:14.007 }, 00:29:14.007 
"memory_domains": [ 00:29:14.007 { 00:29:14.007 "dma_device_id": "system", 00:29:14.007 "dma_device_type": 1 00:29:14.007 } 00:29:14.007 ], 00:29:14.007 "driver_specific": { 00:29:14.007 "nvme": [ 00:29:14.007 { 00:29:14.007 "trid": { 00:29:14.007 "trtype": "TCP", 00:29:14.007 "adrfam": "IPv4", 00:29:14.007 "traddr": "10.0.0.2", 00:29:14.007 "trsvcid": "4420", 00:29:14.007 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.007 }, 00:29:14.007 "ctrlr_data": { 00:29:14.007 "cntlid": 1, 00:29:14.007 "vendor_id": "0x8086", 00:29:14.007 "model_number": "SPDK bdev Controller", 00:29:14.007 "serial_number": "00000000000000000000", 00:29:14.007 "firmware_revision": "25.01", 00:29:14.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.007 "oacs": { 00:29:14.007 "security": 0, 00:29:14.007 "format": 0, 00:29:14.007 "firmware": 0, 00:29:14.007 "ns_manage": 0 00:29:14.007 }, 00:29:14.007 "multi_ctrlr": true, 00:29:14.007 "ana_reporting": false 00:29:14.007 }, 00:29:14.007 "vs": { 00:29:14.007 "nvme_version": "1.3" 00:29:14.007 }, 00:29:14.007 "ns_data": { 00:29:14.007 "id": 1, 00:29:14.007 "can_share": true 00:29:14.007 } 00:29:14.007 } 00:29:14.007 ], 00:29:14.007 "mp_policy": "active_passive" 00:29:14.007 } 00:29:14.007 } 00:29:14.007 ] 00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.007 [2024-11-19 03:10:24.380032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:14.007 [2024-11-19 03:10:24.380111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed1480 (9): Bad file descriptor 00:29:14.007 [2024-11-19 03:10:24.511812] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.007 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.007 [ 00:29:14.007 { 00:29:14.007 "name": "nvme0n1", 00:29:14.007 "aliases": [ 00:29:14.007 "43dd73e0-559d-4000-aa6c-42864367ee7f" 00:29:14.007 ], 00:29:14.007 "product_name": "NVMe disk", 00:29:14.007 "block_size": 512, 00:29:14.007 "num_blocks": 2097152, 00:29:14.007 "uuid": "43dd73e0-559d-4000-aa6c-42864367ee7f", 00:29:14.007 "numa_id": 0, 00:29:14.007 "assigned_rate_limits": { 00:29:14.007 "rw_ios_per_sec": 0, 00:29:14.007 "rw_mbytes_per_sec": 0, 00:29:14.007 "r_mbytes_per_sec": 0, 00:29:14.007 "w_mbytes_per_sec": 0 00:29:14.007 }, 00:29:14.007 "claimed": false, 00:29:14.007 "zoned": false, 00:29:14.007 "supported_io_types": { 00:29:14.007 "read": true, 00:29:14.007 "write": true, 00:29:14.007 "unmap": false, 00:29:14.007 "flush": true, 00:29:14.007 "reset": true, 00:29:14.007 "nvme_admin": true, 00:29:14.007 "nvme_io": true, 00:29:14.007 "nvme_io_md": false, 00:29:14.007 "write_zeroes": true, 00:29:14.008 "zcopy": false, 00:29:14.008 "get_zone_info": false, 00:29:14.008 "zone_management": false, 00:29:14.008 "zone_append": false, 00:29:14.008 "compare": true, 00:29:14.008 "compare_and_write": true, 00:29:14.008 "abort": true, 00:29:14.008 "seek_hole": false, 00:29:14.008 "seek_data": false, 00:29:14.008 "copy": true, 00:29:14.008 "nvme_iov_md": false 00:29:14.008 }, 00:29:14.008 "memory_domains": [ 00:29:14.008 { 00:29:14.008 "dma_device_id": "system", 00:29:14.008 "dma_device_type": 1 00:29:14.008 } 00:29:14.008 ], 00:29:14.008 "driver_specific": { 00:29:14.008 "nvme": [ 00:29:14.008 { 00:29:14.008 "trid": { 00:29:14.008 "trtype": "TCP", 00:29:14.008 "adrfam": "IPv4", 00:29:14.008 "traddr": "10.0.0.2", 00:29:14.008 "trsvcid": "4420", 00:29:14.008 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.008 }, 00:29:14.008 "ctrlr_data": { 00:29:14.008 "cntlid": 2, 00:29:14.008 "vendor_id": "0x8086", 00:29:14.008 "model_number": "SPDK bdev Controller", 00:29:14.008 "serial_number": "00000000000000000000", 00:29:14.008 "firmware_revision": "25.01", 00:29:14.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.008 "oacs": { 00:29:14.008 "security": 0, 00:29:14.008 "format": 0, 00:29:14.008 "firmware": 0, 00:29:14.008 "ns_manage": 0 00:29:14.008 }, 00:29:14.008 "multi_ctrlr": true, 00:29:14.008 "ana_reporting": false 00:29:14.008 }, 00:29:14.008 "vs": { 00:29:14.008 "nvme_version": "1.3" 00:29:14.008 }, 00:29:14.008 "ns_data": { 00:29:14.008 "id": 1, 00:29:14.008 "can_share": true 00:29:14.008 } 00:29:14.008 } 00:29:14.008 ], 00:29:14.008 "mp_policy": "active_passive" 00:29:14.008 } 00:29:14.008 } 00:29:14.008 ] 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
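The bdev_get_bdevs dumps are long, but only two fields change across the test: the transport service id and the controller id. A one-liner along these lines (jq is an assumption here, the test never uses it) reduces each dump to the part worth watching:

    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0] | {trsvcid: .trid.trsvcid, cntlid: .ctrlr_data.cntlid}'
    # first attach:      trsvcid 4420, cntlid 1
    # after the reset:   trsvcid 4420, cntlid 2
    # after TLS attach:  trsvcid 4421, cntlid 3 (see below)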
00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yo8phNVOM0 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yo8phNVOM0 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.yo8phNVOM0 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.008 [2024-11-19 03:10:24.564632] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:14.008 [2024-11-19 03:10:24.564773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.008 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.008 [2024-11-19 03:10:24.580695] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:14.267 nvme0n1 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
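Having detached nvme0, the test repeats the attach over TLS: the interchange-format PSK is written to a mode-0600 temp file, registered with the keyring as key0, any-host access is switched off, a second listener is opened on port 4421 with --secure-channel (hence the "TLS support is considered experimental" notices), nqn.2016-06.io.spdk:host1 is allowed in with that PSK, and the controller is re-attached with -q host1 --psk key0. A condensed sketch of the same plumbing, reusing the rpc.py invocation assumed earlier (key material and values copied from this run):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"      # assumed socket path

    KEY=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
    chmod 0600 "$KEY"
    $RPC keyring_file_add_key key0 "$KEY"
    $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 \
         --secure-channel
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0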
00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.267 [ 00:29:14.267 { 00:29:14.267 "name": "nvme0n1", 00:29:14.267 "aliases": [ 00:29:14.267 "43dd73e0-559d-4000-aa6c-42864367ee7f" 00:29:14.267 ], 00:29:14.267 "product_name": "NVMe disk", 00:29:14.267 "block_size": 512, 00:29:14.267 "num_blocks": 2097152, 00:29:14.267 "uuid": "43dd73e0-559d-4000-aa6c-42864367ee7f", 00:29:14.267 "numa_id": 0, 00:29:14.267 "assigned_rate_limits": { 00:29:14.267 "rw_ios_per_sec": 0, 00:29:14.267 "rw_mbytes_per_sec": 0, 00:29:14.267 "r_mbytes_per_sec": 0, 00:29:14.267 "w_mbytes_per_sec": 0 00:29:14.267 }, 00:29:14.267 "claimed": false, 00:29:14.267 "zoned": false, 00:29:14.267 "supported_io_types": { 00:29:14.267 "read": true, 00:29:14.267 "write": true, 00:29:14.267 "unmap": false, 00:29:14.267 "flush": true, 00:29:14.267 "reset": true, 00:29:14.267 "nvme_admin": true, 00:29:14.267 "nvme_io": true, 00:29:14.267 "nvme_io_md": false, 00:29:14.267 "write_zeroes": true, 00:29:14.267 "zcopy": false, 00:29:14.267 "get_zone_info": false, 00:29:14.267 "zone_management": false, 00:29:14.267 "zone_append": false, 00:29:14.267 "compare": true, 00:29:14.267 "compare_and_write": true, 00:29:14.267 "abort": true, 00:29:14.267 "seek_hole": false, 00:29:14.267 "seek_data": false, 00:29:14.267 "copy": true, 00:29:14.267 "nvme_iov_md": false 00:29:14.267 }, 00:29:14.267 "memory_domains": [ 00:29:14.267 { 00:29:14.267 "dma_device_id": "system", 00:29:14.267 "dma_device_type": 1 00:29:14.267 } 00:29:14.267 ], 00:29:14.267 "driver_specific": { 00:29:14.267 "nvme": [ 00:29:14.267 { 00:29:14.267 "trid": { 00:29:14.267 "trtype": "TCP", 00:29:14.267 "adrfam": "IPv4", 00:29:14.267 "traddr": "10.0.0.2", 00:29:14.267 "trsvcid": "4421", 00:29:14.267 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.267 }, 00:29:14.267 "ctrlr_data": { 00:29:14.267 "cntlid": 3, 00:29:14.267 "vendor_id": "0x8086", 00:29:14.267 "model_number": "SPDK bdev Controller", 00:29:14.267 "serial_number": "00000000000000000000", 00:29:14.267 "firmware_revision": "25.01", 00:29:14.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.267 "oacs": { 00:29:14.267 "security": 0, 00:29:14.267 "format": 0, 00:29:14.267 "firmware": 0, 00:29:14.267 "ns_manage": 0 00:29:14.267 }, 00:29:14.267 "multi_ctrlr": true, 00:29:14.267 "ana_reporting": false 00:29:14.267 }, 00:29:14.267 "vs": { 00:29:14.267 "nvme_version": "1.3" 00:29:14.267 }, 00:29:14.267 "ns_data": { 00:29:14.267 "id": 1, 00:29:14.267 "can_share": true 00:29:14.267 } 00:29:14.267 } 00:29:14.267 ], 00:29:14.267 "mp_policy": "active_passive" 00:29:14.267 } 00:29:14.267 } 00:29:14.267 ] 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.yo8phNVOM0 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
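After the TLS pass the controller is detached, the key file removed, and the EXIT trap cleared; nvmftestfini below then unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills the nvmf_tgt pid, and reverses the firewall and namespace changes. The firewall piece works because every rule the test added carries an SPDK_NVMF comment, so cleanup is a single filter-and-restore rather than per-rule bookkeeping (both commands appear in this log):

    # setup: the accept rule is tagged with its own text as a comment (pattern from nvmf/common.sh)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # teardown: rewrite the ruleset without any SPDK_NVMF-tagged line
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The remaining addresses are flushed with ip -4 addr flush and the cvl_0_0_ns_spdk namespace is removed, returning both ports to their pre-test state before the next test in the suite repeats its own setup.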
00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.267 rmmod nvme_tcp 00:29:14.267 rmmod nvme_fabrics 00:29:14.267 rmmod nvme_keyring 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 337711 ']' 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 337711 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 337711 ']' 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 337711 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337711 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337711' 00:29:14.267 killing process with pid 337711 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 337711 00:29:14.267 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 337711 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.528 
03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.528 03:10:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.437 03:10:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.437 00:29:16.437 real 0m5.816s 00:29:16.437 user 0m2.112s 00:29:16.437 sys 0m2.013s 00:29:16.437 03:10:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.437 03:10:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.437 ************************************ 00:29:16.437 END TEST nvmf_async_init 00:29:16.437 ************************************ 00:29:16.437 03:10:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:16.437 03:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:16.437 03:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.437 03:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.696 ************************************ 00:29:16.696 START TEST dma 00:29:16.696 ************************************ 00:29:16.696 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:16.696 * Looking for test storage... 00:29:16.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.696 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:16.696 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.697 --rc genhtml_branch_coverage=1 00:29:16.697 --rc genhtml_function_coverage=1 00:29:16.697 --rc genhtml_legend=1 00:29:16.697 --rc geninfo_all_blocks=1 00:29:16.697 --rc geninfo_unexecuted_blocks=1 00:29:16.697 00:29:16.697 ' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.697 --rc genhtml_branch_coverage=1 00:29:16.697 --rc genhtml_function_coverage=1 00:29:16.697 --rc genhtml_legend=1 00:29:16.697 --rc geninfo_all_blocks=1 00:29:16.697 --rc geninfo_unexecuted_blocks=1 00:29:16.697 00:29:16.697 ' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.697 --rc genhtml_branch_coverage=1 00:29:16.697 --rc genhtml_function_coverage=1 00:29:16.697 --rc genhtml_legend=1 00:29:16.697 --rc geninfo_all_blocks=1 00:29:16.697 --rc geninfo_unexecuted_blocks=1 00:29:16.697 00:29:16.697 ' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.697 --rc genhtml_branch_coverage=1 00:29:16.697 --rc genhtml_function_coverage=1 00:29:16.697 --rc genhtml_legend=1 00:29:16.697 --rc geninfo_all_blocks=1 00:29:16.697 --rc geninfo_unexecuted_blocks=1 00:29:16.697 00:29:16.697 ' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.697 
03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:16.697 00:29:16.697 real 0m0.174s 00:29:16.697 user 0m0.104s 00:29:16.697 sys 0m0.080s 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.697 03:10:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:16.697 ************************************ 00:29:16.697 END TEST dma 00:29:16.697 ************************************ 00:29:16.698 03:10:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:16.698 03:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:16.698 03:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.698 03:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.698 ************************************ 00:29:16.698 START TEST nvmf_identify 00:29:16.698 
************************************ 00:29:16.698 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:16.961 * Looking for test storage... 00:29:16.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.961 --rc genhtml_branch_coverage=1 00:29:16.961 --rc genhtml_function_coverage=1 00:29:16.961 --rc genhtml_legend=1 00:29:16.961 --rc geninfo_all_blocks=1 00:29:16.961 --rc geninfo_unexecuted_blocks=1 00:29:16.961 00:29:16.961 ' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.961 --rc genhtml_branch_coverage=1 00:29:16.961 --rc genhtml_function_coverage=1 00:29:16.961 --rc genhtml_legend=1 00:29:16.961 --rc geninfo_all_blocks=1 00:29:16.961 --rc geninfo_unexecuted_blocks=1 00:29:16.961 00:29:16.961 ' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.961 --rc genhtml_branch_coverage=1 00:29:16.961 --rc genhtml_function_coverage=1 00:29:16.961 --rc genhtml_legend=1 00:29:16.961 --rc geninfo_all_blocks=1 00:29:16.961 --rc geninfo_unexecuted_blocks=1 00:29:16.961 00:29:16.961 ' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.961 --rc genhtml_branch_coverage=1 00:29:16.961 --rc genhtml_function_coverage=1 00:29:16.961 --rc genhtml_legend=1 00:29:16.961 --rc geninfo_all_blocks=1 00:29:16.961 --rc geninfo_unexecuted_blocks=1 00:29:16.961 00:29:16.961 ' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.961 03:10:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.864 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.865 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.865 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.865 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.865 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.865 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:29:19.124 00:29:19.124 --- 10.0.0.2 ping statistics --- 00:29:19.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.124 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:29:19.124 00:29:19.124 --- 10.0.0.1 ping statistics --- 00:29:19.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.124 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=339852 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 339852 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 339852 ']' 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.124 03:10:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.382 [2024-11-19 03:10:29.765198] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:29:19.382 [2024-11-19 03:10:29.765273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.382 [2024-11-19 03:10:29.837465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.382 [2024-11-19 03:10:29.885041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.382 [2024-11-19 03:10:29.885095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.382 [2024-11-19 03:10:29.885120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.382 [2024-11-19 03:10:29.885132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.382 [2024-11-19 03:10:29.885156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.382 [2024-11-19 03:10:29.886592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.382 [2024-11-19 03:10:29.886656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.382 [2024-11-19 03:10:29.886731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.382 [2024-11-19 03:10:29.886734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.642 [2024-11-19 03:10:30.016674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.642 Malloc0 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.642 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.656 [2024-11-19 03:10:30.106191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.656 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.656 [ 00:29:19.656 { 00:29:19.656 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:19.656 "subtype": "Discovery", 00:29:19.656 "listen_addresses": [ 00:29:19.656 { 00:29:19.656 "trtype": "TCP", 00:29:19.656 "adrfam": "IPv4", 00:29:19.656 "traddr": "10.0.0.2", 00:29:19.656 "trsvcid": "4420" 00:29:19.656 } 00:29:19.656 ], 00:29:19.656 "allow_any_host": true, 00:29:19.656 "hosts": [] 00:29:19.656 }, 00:29:19.656 { 00:29:19.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.656 "subtype": "NVMe", 00:29:19.656 "listen_addresses": [ 00:29:19.656 { 00:29:19.656 "trtype": "TCP", 00:29:19.656 "adrfam": "IPv4", 00:29:19.656 "traddr": "10.0.0.2", 00:29:19.656 "trsvcid": "4420" 00:29:19.656 } 00:29:19.656 ], 00:29:19.656 "allow_any_host": true, 00:29:19.656 "hosts": [], 00:29:19.656 "serial_number": "SPDK00000000000001", 00:29:19.656 "model_number": "SPDK bdev Controller", 00:29:19.656 "max_namespaces": 32, 00:29:19.657 "min_cntlid": 1, 00:29:19.657 "max_cntlid": 65519, 00:29:19.657 "namespaces": [ 00:29:19.657 { 00:29:19.657 "nsid": 1, 00:29:19.657 "bdev_name": "Malloc0", 00:29:19.657 "name": "Malloc0", 00:29:19.657 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:19.657 "eui64": "ABCDEF0123456789", 00:29:19.657 "uuid": "5065b88a-cf24-4ae5-bd00-a751977965d3" 00:29:19.657 } 00:29:19.657 ] 00:29:19.657 } 00:29:19.657 ] 00:29:19.657 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.657 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:19.657 [2024-11-19 03:10:30.144958] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:19.657 [2024-11-19 03:10:30.145028] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339995 ] 00:29:19.657 [2024-11-19 03:10:30.196348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:19.657 [2024-11-19 03:10:30.196426] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:19.657 [2024-11-19 03:10:30.196437] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:19.657 [2024-11-19 03:10:30.196459] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:19.657 [2024-11-19 03:10:30.196476] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:19.657 [2024-11-19 03:10:30.200155] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:19.657 [2024-11-19 03:10:30.200223] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c1c650 0 00:29:19.657 [2024-11-19 03:10:30.200364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:19.657 [2024-11-19 03:10:30.200384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:19.657 [2024-11-19 03:10:30.200393] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:19.657 [2024-11-19 03:10:30.200399] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:19.657 [2024-11-19 03:10:30.200446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.200461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.200469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.657 [2024-11-19 03:10:30.200490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:19.657 [2024-11-19 03:10:30.200515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.657 [2024-11-19 03:10:30.206704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.657 [2024-11-19 03:10:30.206723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.657 [2024-11-19 03:10:30.206731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.206739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.657 [2024-11-19 03:10:30.206761] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:19.657 [2024-11-19 03:10:30.206774] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:19.657 [2024-11-19 03:10:30.206784] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:19.657 [2024-11-19 03:10:30.206809] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.206818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.206825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.657 [2024-11-19 03:10:30.206836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.657 [2024-11-19 03:10:30.206861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.657 [2024-11-19 03:10:30.206977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.657 [2024-11-19 03:10:30.206989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.657 [2024-11-19 03:10:30.206996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.657 [2024-11-19 03:10:30.207013] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:19.657 [2024-11-19 03:10:30.207025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:19.657 [2024-11-19 03:10:30.207038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.657 [2024-11-19 03:10:30.207062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.657 [2024-11-19 03:10:30.207088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.657 [2024-11-19 03:10:30.207167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.657 [2024-11-19 03:10:30.207179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.657 [2024-11-19 03:10:30.207186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.657 [2024-11-19 03:10:30.207202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:19.657 [2024-11-19 03:10:30.207216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:19.657 [2024-11-19 03:10:30.207229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.657 [2024-11-19 03:10:30.207252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.657 [2024-11-19 03:10:30.207273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 
00:29:19.657 [2024-11-19 03:10:30.207347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.657 [2024-11-19 03:10:30.207359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.657 [2024-11-19 03:10:30.207366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.657 [2024-11-19 03:10:30.207381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:19.657 [2024-11-19 03:10:30.207398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.657 [2024-11-19 03:10:30.207423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.657 [2024-11-19 03:10:30.207444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.657 [2024-11-19 03:10:30.207518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.657 [2024-11-19 03:10:30.207532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.657 [2024-11-19 03:10:30.207539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.657 [2024-11-19 03:10:30.207554] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:19.657 [2024-11-19 03:10:30.207563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:19.657 [2024-11-19 03:10:30.207575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:19.657 [2024-11-19 03:10:30.207686] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:19.657 [2024-11-19 03:10:30.207704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:19.657 [2024-11-19 03:10:30.207722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.657 [2024-11-19 03:10:30.207752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.657 [2024-11-19 03:10:30.207774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.657 [2024-11-19 03:10:30.207889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.657 [2024-11-19 03:10:30.207903] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.657 [2024-11-19 03:10:30.207910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.657 [2024-11-19 03:10:30.207925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:19.657 [2024-11-19 03:10:30.207942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.657 [2024-11-19 03:10:30.207957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.657 [2024-11-19 03:10:30.207967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.657 [2024-11-19 03:10:30.207988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.657 [2024-11-19 03:10:30.208064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.657 [2024-11-19 03:10:30.208077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.657 [2024-11-19 03:10:30.208084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.208090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.658 [2024-11-19 03:10:30.208098] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:19.658 [2024-11-19 03:10:30.208106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:19.658 [2024-11-19 03:10:30.208119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:19.658 [2024-11-19 03:10:30.208136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:19.658 [2024-11-19 03:10:30.208154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.208162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.208172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.658 [2024-11-19 03:10:30.208194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.658 [2024-11-19 03:10:30.208319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.658 [2024-11-19 03:10:30.208334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.658 [2024-11-19 03:10:30.208341] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.208348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c1c650): datao=0, datal=4096, cccid=0 00:29:19.658 [2024-11-19 03:10:30.208356] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1c76f40) on tqpair(0x1c1c650): expected_datao=0, payload_size=4096 00:29:19.658 [2024-11-19 03:10:30.208363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.208382] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.208397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.248786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.658 [2024-11-19 03:10:30.248804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.658 [2024-11-19 03:10:30.248812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.248819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.658 [2024-11-19 03:10:30.248833] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:19.658 [2024-11-19 03:10:30.248842] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:19.658 [2024-11-19 03:10:30.248849] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:19.658 [2024-11-19 03:10:30.248865] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:19.658 [2024-11-19 03:10:30.248876] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:19.658 [2024-11-19 03:10:30.248884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:19.658 [2024-11-19 03:10:30.248903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:19.658 [2024-11-19 03:10:30.248918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.248925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.248932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.248943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:19.658 [2024-11-19 03:10:30.248966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.658 [2024-11-19 03:10:30.249060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.658 [2024-11-19 03:10:30.249072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.658 [2024-11-19 03:10:30.249079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.658 [2024-11-19 03:10:30.249098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c1c650) 00:29:19.658 
[2024-11-19 03:10:30.249122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.658 [2024-11-19 03:10:30.249132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.249154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.658 [2024-11-19 03:10:30.249164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.249185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.658 [2024-11-19 03:10:30.249199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.249222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.658 [2024-11-19 03:10:30.249231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:19.658 [2024-11-19 03:10:30.249246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:19.658 [2024-11-19 03:10:30.249258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.249275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.658 [2024-11-19 03:10:30.249298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f40, cid 0, qid 0 00:29:19.658 [2024-11-19 03:10:30.249324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c770c0, cid 1, qid 0 00:29:19.658 [2024-11-19 03:10:30.249332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c77240, cid 2, qid 0 00:29:19.658 [2024-11-19 03:10:30.249340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.658 [2024-11-19 03:10:30.249347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c77540, cid 4, qid 0 00:29:19.658 [2024-11-19 03:10:30.249546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.658 [2024-11-19 03:10:30.249560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.658 [2024-11-19 03:10:30.249567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:19.658 [2024-11-19 03:10:30.249574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c77540) on tqpair=0x1c1c650 00:29:19.658 [2024-11-19 03:10:30.249589] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:19.658 [2024-11-19 03:10:30.249599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:19.658 [2024-11-19 03:10:30.249616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.249637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.658 [2024-11-19 03:10:30.249658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c77540, cid 4, qid 0 00:29:19.658 [2024-11-19 03:10:30.249765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.658 [2024-11-19 03:10:30.249779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.658 [2024-11-19 03:10:30.249785] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249792] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c1c650): datao=0, datal=4096, cccid=4 00:29:19.658 [2024-11-19 03:10:30.249799] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c77540) on tqpair(0x1c1c650): expected_datao=0, payload_size=4096 00:29:19.658 [2024-11-19 03:10:30.249807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249817] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249825] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.658 [2024-11-19 03:10:30.249845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.658 [2024-11-19 03:10:30.249856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c77540) on tqpair=0x1c1c650 00:29:19.658 [2024-11-19 03:10:30.249883] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:19.658 [2024-11-19 03:10:30.249921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.249943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.658 [2024-11-19 03:10:30.249955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.658 [2024-11-19 03:10:30.249968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c1c650) 00:29:19.658 [2024-11-19 03:10:30.249977] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.658 [2024-11-19 03:10:30.250006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c77540, cid 4, qid 0 00:29:19.659 [2024-11-19 03:10:30.250018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c776c0, cid 5, qid 0 00:29:19.659 [2024-11-19 03:10:30.250159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.659 [2024-11-19 03:10:30.250173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.659 [2024-11-19 03:10:30.250180] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.659 [2024-11-19 03:10:30.250186] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c1c650): datao=0, datal=1024, cccid=4 00:29:19.659 [2024-11-19 03:10:30.250194] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c77540) on tqpair(0x1c1c650): expected_datao=0, payload_size=1024 00:29:19.659 [2024-11-19 03:10:30.250201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.659 [2024-11-19 03:10:30.250210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.659 [2024-11-19 03:10:30.250217] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:19.659 [2024-11-19 03:10:30.250226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.659 [2024-11-19 03:10:30.250234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.659 [2024-11-19 03:10:30.250241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.659 [2024-11-19 03:10:30.250247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c776c0) on tqpair=0x1c1c650 00:29:19.920 [2024-11-19 03:10:30.294704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.920 [2024-11-19 03:10:30.294722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.920 [2024-11-19 03:10:30.294730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.294737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c77540) on tqpair=0x1c1c650 00:29:19.920 [2024-11-19 03:10:30.294756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.294765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c1c650) 00:29:19.920 [2024-11-19 03:10:30.294776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.920 [2024-11-19 03:10:30.294807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c77540, cid 4, qid 0 00:29:19.920 [2024-11-19 03:10:30.294940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.920 [2024-11-19 03:10:30.294952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.920 [2024-11-19 03:10:30.294959] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.294965] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c1c650): datao=0, datal=3072, cccid=4 00:29:19.920 [2024-11-19 03:10:30.294977] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c77540) on tqpair(0x1c1c650): expected_datao=0, payload_size=3072 00:29:19.920 [2024-11-19 03:10:30.294985] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.295005] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.295015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.338703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.920 [2024-11-19 03:10:30.338723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.920 [2024-11-19 03:10:30.338731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.338738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c77540) on tqpair=0x1c1c650 00:29:19.920 [2024-11-19 03:10:30.338754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.338763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c1c650) 00:29:19.920 [2024-11-19 03:10:30.338774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.920 [2024-11-19 03:10:30.338805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c77540, cid 4, qid 0 00:29:19.920 [2024-11-19 03:10:30.338901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.920 [2024-11-19 03:10:30.338913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.920 [2024-11-19 03:10:30.338920] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.338926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c1c650): datao=0, datal=8, cccid=4 00:29:19.920 [2024-11-19 03:10:30.338934] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c77540) on tqpair(0x1c1c650): expected_datao=0, payload_size=8 00:29:19.920 [2024-11-19 03:10:30.338941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.338951] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.338958] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.379780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.920 [2024-11-19 03:10:30.379799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.920 [2024-11-19 03:10:30.379806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.920 [2024-11-19 03:10:30.379813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c77540) on tqpair=0x1c1c650 00:29:19.920 ===================================================== 00:29:19.920 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:19.920 ===================================================== 00:29:19.920 Controller Capabilities/Features 00:29:19.920 ================================ 00:29:19.920 Vendor ID: 0000 00:29:19.920 Subsystem Vendor ID: 0000 00:29:19.920 Serial Number: .................... 00:29:19.920 Model Number: ........................................ 
00:29:19.920 Firmware Version: 25.01 00:29:19.920 Recommended Arb Burst: 0 00:29:19.920 IEEE OUI Identifier: 00 00 00 00:29:19.920 Multi-path I/O 00:29:19.920 May have multiple subsystem ports: No 00:29:19.920 May have multiple controllers: No 00:29:19.920 Associated with SR-IOV VF: No 00:29:19.920 Max Data Transfer Size: 131072 00:29:19.920 Max Number of Namespaces: 0 00:29:19.920 Max Number of I/O Queues: 1024 00:29:19.920 NVMe Specification Version (VS): 1.3 00:29:19.920 NVMe Specification Version (Identify): 1.3 00:29:19.920 Maximum Queue Entries: 128 00:29:19.920 Contiguous Queues Required: Yes 00:29:19.920 Arbitration Mechanisms Supported 00:29:19.920 Weighted Round Robin: Not Supported 00:29:19.920 Vendor Specific: Not Supported 00:29:19.920 Reset Timeout: 15000 ms 00:29:19.920 Doorbell Stride: 4 bytes 00:29:19.920 NVM Subsystem Reset: Not Supported 00:29:19.920 Command Sets Supported 00:29:19.920 NVM Command Set: Supported 00:29:19.920 Boot Partition: Not Supported 00:29:19.920 Memory Page Size Minimum: 4096 bytes 00:29:19.920 Memory Page Size Maximum: 4096 bytes 00:29:19.920 Persistent Memory Region: Not Supported 00:29:19.920 Optional Asynchronous Events Supported 00:29:19.920 Namespace Attribute Notices: Not Supported 00:29:19.920 Firmware Activation Notices: Not Supported 00:29:19.920 ANA Change Notices: Not Supported 00:29:19.920 PLE Aggregate Log Change Notices: Not Supported 00:29:19.920 LBA Status Info Alert Notices: Not Supported 00:29:19.920 EGE Aggregate Log Change Notices: Not Supported 00:29:19.920 Normal NVM Subsystem Shutdown event: Not Supported 00:29:19.920 Zone Descriptor Change Notices: Not Supported 00:29:19.920 Discovery Log Change Notices: Supported 00:29:19.920 Controller Attributes 00:29:19.920 128-bit Host Identifier: Not Supported 00:29:19.920 Non-Operational Permissive Mode: Not Supported 00:29:19.920 NVM Sets: Not Supported 00:29:19.920 Read Recovery Levels: Not Supported 00:29:19.920 Endurance Groups: Not Supported 00:29:19.920 Predictable Latency Mode: Not Supported 00:29:19.920 Traffic Based Keep ALive: Not Supported 00:29:19.920 Namespace Granularity: Not Supported 00:29:19.920 SQ Associations: Not Supported 00:29:19.920 UUID List: Not Supported 00:29:19.920 Multi-Domain Subsystem: Not Supported 00:29:19.920 Fixed Capacity Management: Not Supported 00:29:19.920 Variable Capacity Management: Not Supported 00:29:19.920 Delete Endurance Group: Not Supported 00:29:19.920 Delete NVM Set: Not Supported 00:29:19.920 Extended LBA Formats Supported: Not Supported 00:29:19.920 Flexible Data Placement Supported: Not Supported 00:29:19.920 00:29:19.920 Controller Memory Buffer Support 00:29:19.920 ================================ 00:29:19.920 Supported: No 00:29:19.920 00:29:19.920 Persistent Memory Region Support 00:29:19.920 ================================ 00:29:19.920 Supported: No 00:29:19.920 00:29:19.920 Admin Command Set Attributes 00:29:19.920 ============================ 00:29:19.920 Security Send/Receive: Not Supported 00:29:19.920 Format NVM: Not Supported 00:29:19.920 Firmware Activate/Download: Not Supported 00:29:19.920 Namespace Management: Not Supported 00:29:19.920 Device Self-Test: Not Supported 00:29:19.920 Directives: Not Supported 00:29:19.920 NVMe-MI: Not Supported 00:29:19.920 Virtualization Management: Not Supported 00:29:19.920 Doorbell Buffer Config: Not Supported 00:29:19.920 Get LBA Status Capability: Not Supported 00:29:19.920 Command & Feature Lockdown Capability: Not Supported 00:29:19.920 Abort Command Limit: 1 00:29:19.920 Async 
Event Request Limit: 4 00:29:19.920 Number of Firmware Slots: N/A 00:29:19.920 Firmware Slot 1 Read-Only: N/A 00:29:19.920 Firmware Activation Without Reset: N/A 00:29:19.920 Multiple Update Detection Support: N/A 00:29:19.920 Firmware Update Granularity: No Information Provided 00:29:19.920 Per-Namespace SMART Log: No 00:29:19.920 Asymmetric Namespace Access Log Page: Not Supported 00:29:19.920 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:19.920 Command Effects Log Page: Not Supported 00:29:19.920 Get Log Page Extended Data: Supported 00:29:19.920 Telemetry Log Pages: Not Supported 00:29:19.920 Persistent Event Log Pages: Not Supported 00:29:19.920 Supported Log Pages Log Page: May Support 00:29:19.920 Commands Supported & Effects Log Page: Not Supported 00:29:19.920 Feature Identifiers & Effects Log Page:May Support 00:29:19.920 NVMe-MI Commands & Effects Log Page: May Support 00:29:19.920 Data Area 4 for Telemetry Log: Not Supported 00:29:19.920 Error Log Page Entries Supported: 128 00:29:19.920 Keep Alive: Not Supported 00:29:19.920 00:29:19.920 NVM Command Set Attributes 00:29:19.920 ========================== 00:29:19.920 Submission Queue Entry Size 00:29:19.920 Max: 1 00:29:19.921 Min: 1 00:29:19.921 Completion Queue Entry Size 00:29:19.921 Max: 1 00:29:19.921 Min: 1 00:29:19.921 Number of Namespaces: 0 00:29:19.921 Compare Command: Not Supported 00:29:19.921 Write Uncorrectable Command: Not Supported 00:29:19.921 Dataset Management Command: Not Supported 00:29:19.921 Write Zeroes Command: Not Supported 00:29:19.921 Set Features Save Field: Not Supported 00:29:19.921 Reservations: Not Supported 00:29:19.921 Timestamp: Not Supported 00:29:19.921 Copy: Not Supported 00:29:19.921 Volatile Write Cache: Not Present 00:29:19.921 Atomic Write Unit (Normal): 1 00:29:19.921 Atomic Write Unit (PFail): 1 00:29:19.921 Atomic Compare & Write Unit: 1 00:29:19.921 Fused Compare & Write: Supported 00:29:19.921 Scatter-Gather List 00:29:19.921 SGL Command Set: Supported 00:29:19.921 SGL Keyed: Supported 00:29:19.921 SGL Bit Bucket Descriptor: Not Supported 00:29:19.921 SGL Metadata Pointer: Not Supported 00:29:19.921 Oversized SGL: Not Supported 00:29:19.921 SGL Metadata Address: Not Supported 00:29:19.921 SGL Offset: Supported 00:29:19.921 Transport SGL Data Block: Not Supported 00:29:19.921 Replay Protected Memory Block: Not Supported 00:29:19.921 00:29:19.921 Firmware Slot Information 00:29:19.921 ========================= 00:29:19.921 Active slot: 0 00:29:19.921 00:29:19.921 00:29:19.921 Error Log 00:29:19.921 ========= 00:29:19.921 00:29:19.921 Active Namespaces 00:29:19.921 ================= 00:29:19.921 Discovery Log Page 00:29:19.921 ================== 00:29:19.921 Generation Counter: 2 00:29:19.921 Number of Records: 2 00:29:19.921 Record Format: 0 00:29:19.921 00:29:19.921 Discovery Log Entry 0 00:29:19.921 ---------------------- 00:29:19.921 Transport Type: 3 (TCP) 00:29:19.921 Address Family: 1 (IPv4) 00:29:19.921 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:19.921 Entry Flags: 00:29:19.921 Duplicate Returned Information: 1 00:29:19.921 Explicit Persistent Connection Support for Discovery: 1 00:29:19.921 Transport Requirements: 00:29:19.921 Secure Channel: Not Required 00:29:19.921 Port ID: 0 (0x0000) 00:29:19.921 Controller ID: 65535 (0xffff) 00:29:19.921 Admin Max SQ Size: 128 00:29:19.921 Transport Service Identifier: 4420 00:29:19.921 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:19.921 Transport Address: 10.0.0.2 00:29:19.921 
Discovery Log Entry 1 00:29:19.921 ---------------------- 00:29:19.921 Transport Type: 3 (TCP) 00:29:19.921 Address Family: 1 (IPv4) 00:29:19.921 Subsystem Type: 2 (NVM Subsystem) 00:29:19.921 Entry Flags: 00:29:19.921 Duplicate Returned Information: 0 00:29:19.921 Explicit Persistent Connection Support for Discovery: 0 00:29:19.921 Transport Requirements: 00:29:19.921 Secure Channel: Not Required 00:29:19.921 Port ID: 0 (0x0000) 00:29:19.921 Controller ID: 65535 (0xffff) 00:29:19.921 Admin Max SQ Size: 128 00:29:19.921 Transport Service Identifier: 4420 00:29:19.921 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:19.921 Transport Address: 10.0.0.2 [2024-11-19 03:10:30.379931] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:19.921 [2024-11-19 03:10:30.379954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76f40) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.379968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.921 [2024-11-19 03:10:30.379977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c770c0) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.379985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.921 [2024-11-19 03:10:30.379993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c77240) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.380000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.921 [2024-11-19 03:10:30.380008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.380016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.921 [2024-11-19 03:10:30.380034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.921 [2024-11-19 03:10:30.380065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.921 [2024-11-19 03:10:30.380091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.921 [2024-11-19 03:10:30.380190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.921 [2024-11-19 03:10:30.380204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.921 [2024-11-19 03:10:30.380211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.380230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.921 [2024-11-19 
03:10:30.380255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.921 [2024-11-19 03:10:30.380282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.921 [2024-11-19 03:10:30.380372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.921 [2024-11-19 03:10:30.380386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.921 [2024-11-19 03:10:30.380392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.380408] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:19.921 [2024-11-19 03:10:30.380415] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:19.921 [2024-11-19 03:10:30.380431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.921 [2024-11-19 03:10:30.380457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.921 [2024-11-19 03:10:30.380478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.921 [2024-11-19 03:10:30.380554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.921 [2024-11-19 03:10:30.380568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.921 [2024-11-19 03:10:30.380575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.380598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.921 [2024-11-19 03:10:30.380624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.921 [2024-11-19 03:10:30.380645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.921 [2024-11-19 03:10:30.380726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.921 [2024-11-19 03:10:30.380740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.921 [2024-11-19 03:10:30.380747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.380775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380791] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.921 [2024-11-19 03:10:30.380801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.921 [2024-11-19 03:10:30.380823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.921 [2024-11-19 03:10:30.380900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.921 [2024-11-19 03:10:30.380913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.921 [2024-11-19 03:10:30.380920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.921 [2024-11-19 03:10:30.380942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.921 [2024-11-19 03:10:30.380958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.921 [2024-11-19 03:10:30.380968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.921 [2024-11-19 03:10:30.380989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.921 [2024-11-19 03:10:30.381062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.921 [2024-11-19 03:10:30.381076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.381082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.381105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.381131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.381151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.381244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.381257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.381264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.381287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.381312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.381333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.381414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.381426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.381432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.381458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.381485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.381506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.381594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.381606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.381613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.381634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.381660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.381681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.381764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.381777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.381783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.381805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.381831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.381852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 
[2024-11-19 03:10:30.381922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.381934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.381940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.381962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.381978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.381988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.382009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.382085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.382099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.382106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.382128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.382159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.382180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.382272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.382286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.382292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.382315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.382341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.382361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.382430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.382442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:19.922 [2024-11-19 03:10:30.382449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.382471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.382496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.382517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.382606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.382618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.382624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.382646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.382672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.382702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.382774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.382786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.382792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.382814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.382845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.382866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.922 [2024-11-19 03:10:30.382955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.922 [2024-11-19 03:10:30.382967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.922 [2024-11-19 03:10:30.382973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.382980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.922 [2024-11-19 03:10:30.382995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.383004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.922 [2024-11-19 03:10:30.383011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.922 [2024-11-19 03:10:30.383021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.922 [2024-11-19 03:10:30.383041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.923 [2024-11-19 03:10:30.383115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.383128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.383135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.923 [2024-11-19 03:10:30.383157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.923 [2024-11-19 03:10:30.383183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.923 [2024-11-19 03:10:30.383204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.923 [2024-11-19 03:10:30.383281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.383294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.383300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.923 [2024-11-19 03:10:30.383323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.923 [2024-11-19 03:10:30.383349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.923 [2024-11-19 03:10:30.383369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.923 [2024-11-19 03:10:30.383439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.383452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.383459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.923 [2024-11-19 03:10:30.383482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383491] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.923 [2024-11-19 03:10:30.383512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.923 [2024-11-19 03:10:30.383534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.923 [2024-11-19 03:10:30.383606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.383618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.383624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.923 [2024-11-19 03:10:30.383646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.383662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.923 [2024-11-19 03:10:30.383672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.923 [2024-11-19 03:10:30.387711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.923 [2024-11-19 03:10:30.387733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.387744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.387750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.387757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.923 [2024-11-19 03:10:30.387774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.387784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.387791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c1c650) 00:29:19.923 [2024-11-19 03:10:30.387801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.923 [2024-11-19 03:10:30.387823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c773c0, cid 3, qid 0 00:29:19.923 [2024-11-19 03:10:30.387937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.387951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.387958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.387964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c773c0) on tqpair=0x1c1c650 00:29:19.923 [2024-11-19 03:10:30.387978] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:29:19.923 00:29:19.923 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:19.923 [2024-11-19 03:10:30.423396] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:19.923 [2024-11-19 03:10:30.423439] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339998 ] 00:29:19.923 [2024-11-19 03:10:30.472779] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:19.923 [2024-11-19 03:10:30.472840] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:19.923 [2024-11-19 03:10:30.472851] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:19.923 [2024-11-19 03:10:30.472871] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:19.923 [2024-11-19 03:10:30.472886] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:19.923 [2024-11-19 03:10:30.476971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:19.923 [2024-11-19 03:10:30.477010] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbf7650 0 00:29:19.923 [2024-11-19 03:10:30.477206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:19.923 [2024-11-19 03:10:30.477224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:19.923 [2024-11-19 03:10:30.477231] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:19.923 [2024-11-19 03:10:30.477237] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:19.923 [2024-11-19 03:10:30.477272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.477283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.477290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.923 [2024-11-19 03:10:30.477305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:19.923 [2024-11-19 03:10:30.477330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.923 [2024-11-19 03:10:30.484723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.484741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.484749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.484756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.923 [2024-11-19 03:10:30.484773] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:19.923 [2024-11-19 03:10:30.484785] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:19.923 [2024-11-19 03:10:30.484794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:19.923 [2024-11-19 03:10:30.484812] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.484821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.484828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.923 [2024-11-19 03:10:30.484839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.923 [2024-11-19 03:10:30.484863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.923 [2024-11-19 03:10:30.485044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.485057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.485064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.485071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.923 [2024-11-19 03:10:30.485079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:19.923 [2024-11-19 03:10:30.485092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:19.923 [2024-11-19 03:10:30.485105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.485112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.485119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.923 [2024-11-19 03:10:30.485129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.923 [2024-11-19 03:10:30.485156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.923 [2024-11-19 03:10:30.485237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.923 [2024-11-19 03:10:30.485251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.923 [2024-11-19 03:10:30.485258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.923 [2024-11-19 03:10:30.485265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.923 [2024-11-19 03:10:30.485274] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:19.923 [2024-11-19 03:10:30.485287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:19.924 [2024-11-19 03:10:30.485300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.924 [2024-11-19 03:10:30.485325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.924 [2024-11-19 03:10:30.485346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.924 [2024-11-19 03:10:30.485416] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.924 [2024-11-19 03:10:30.485429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.924 [2024-11-19 03:10:30.485435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.924 [2024-11-19 03:10:30.485450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:19.924 [2024-11-19 03:10:30.485466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.924 [2024-11-19 03:10:30.485493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.924 [2024-11-19 03:10:30.485514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.924 [2024-11-19 03:10:30.485589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.924 [2024-11-19 03:10:30.485603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.924 [2024-11-19 03:10:30.485610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.924 [2024-11-19 03:10:30.485624] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:19.924 [2024-11-19 03:10:30.485632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:19.924 [2024-11-19 03:10:30.485645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:19.924 [2024-11-19 03:10:30.485755] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:19.924 [2024-11-19 03:10:30.485766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:19.924 [2024-11-19 03:10:30.485778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.924 [2024-11-19 03:10:30.485806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.924 [2024-11-19 03:10:30.485830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.924 [2024-11-19 03:10:30.485912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.924 [2024-11-19 03:10:30.485924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.924 [2024-11-19 03:10:30.485931] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.924 [2024-11-19 03:10:30.485946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:19.924 [2024-11-19 03:10:30.485962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.485978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.924 [2024-11-19 03:10:30.485988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.924 [2024-11-19 03:10:30.486009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.924 [2024-11-19 03:10:30.486087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.924 [2024-11-19 03:10:30.486100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.924 [2024-11-19 03:10:30.486107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.924 [2024-11-19 03:10:30.486121] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:19.924 [2024-11-19 03:10:30.486129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:19.924 [2024-11-19 03:10:30.486142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:19.924 [2024-11-19 03:10:30.486157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:19.924 [2024-11-19 03:10:30.486170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.924 [2024-11-19 03:10:30.486188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.924 [2024-11-19 03:10:30.486209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.924 [2024-11-19 03:10:30.486320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.924 [2024-11-19 03:10:30.486334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.924 [2024-11-19 03:10:30.486341] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486347] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=4096, cccid=0 00:29:19.924 [2024-11-19 03:10:30.486355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc51f40) on tqpair(0xbf7650): expected_datao=0, payload_size=4096 00:29:19.924 [2024-11-19 03:10:30.486362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486379] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486388] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.924 [2024-11-19 03:10:30.486432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.924 [2024-11-19 03:10:30.486440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.924 [2024-11-19 03:10:30.486457] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:19.924 [2024-11-19 03:10:30.486465] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:19.924 [2024-11-19 03:10:30.486473] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:19.924 [2024-11-19 03:10:30.486484] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:19.924 [2024-11-19 03:10:30.486492] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:19.924 [2024-11-19 03:10:30.486501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:19.924 [2024-11-19 03:10:30.486520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:19.924 [2024-11-19 03:10:30.486533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.924 [2024-11-19 03:10:30.486558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:19.924 [2024-11-19 03:10:30.486580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.924 [2024-11-19 03:10:30.486656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.924 [2024-11-19 03:10:30.486668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.924 [2024-11-19 03:10:30.486675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:19.924 [2024-11-19 03:10:30.486698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.924 [2024-11-19 03:10:30.486714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7650) 00:29:19.924 [2024-11-19 03:10:30.486724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.925 [2024-11-19 03:10:30.486734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:29:19.925 [2024-11-19 03:10:30.486741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.486748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbf7650) 00:29:19.925 [2024-11-19 03:10:30.486757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.925 [2024-11-19 03:10:30.486766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.486773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.486779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbf7650) 00:29:19.925 [2024-11-19 03:10:30.486788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.925 [2024-11-19 03:10:30.486798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.486804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.486811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:19.925 [2024-11-19 03:10:30.486823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.925 [2024-11-19 03:10:30.486833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.486848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.486860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.486867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7650) 00:29:19.925 [2024-11-19 03:10:30.486877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.925 [2024-11-19 03:10:30.486900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51f40, cid 0, qid 0 00:29:19.925 [2024-11-19 03:10:30.486911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc520c0, cid 1, qid 0 00:29:19.925 [2024-11-19 03:10:30.486919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52240, cid 2, qid 0 00:29:19.925 [2024-11-19 03:10:30.486927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:19.925 [2024-11-19 03:10:30.486935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52540, cid 4, qid 0 00:29:19.925 [2024-11-19 03:10:30.487071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.925 [2024-11-19 03:10:30.487083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.925 [2024-11-19 03:10:30.487090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.487096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52540) on tqpair=0xbf7650 00:29:19.925 [2024-11-19 03:10:30.487108] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending 
keep alive every 5000000 us 00:29:19.925 [2024-11-19 03:10:30.487119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.487133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.487145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.487156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.487163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.487170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7650) 00:29:19.925 [2024-11-19 03:10:30.487180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:19.925 [2024-11-19 03:10:30.487201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52540, cid 4, qid 0 00:29:19.925 [2024-11-19 03:10:30.487314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.925 [2024-11-19 03:10:30.487328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.925 [2024-11-19 03:10:30.487334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.487341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52540) on tqpair=0xbf7650 00:29:19.925 [2024-11-19 03:10:30.487411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.487431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.487447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.487455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7650) 00:29:19.925 [2024-11-19 03:10:30.487469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.925 [2024-11-19 03:10:30.487491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52540, cid 4, qid 0 00:29:19.925 [2024-11-19 03:10:30.487584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.925 [2024-11-19 03:10:30.487599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.925 [2024-11-19 03:10:30.487606] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.487612] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=4096, cccid=4 00:29:19.925 [2024-11-19 03:10:30.487620] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc52540) on tqpair(0xbf7650): expected_datao=0, payload_size=4096 00:29:19.925 [2024-11-19 03:10:30.487627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.487644] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 
03:10:30.487653] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.527788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:19.925 [2024-11-19 03:10:30.527807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:19.925 [2024-11-19 03:10:30.527814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.527821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52540) on tqpair=0xbf7650 00:29:19.925 [2024-11-19 03:10:30.527848] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:19.925 [2024-11-19 03:10:30.527867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.527886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:19.925 [2024-11-19 03:10:30.527900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.527908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7650) 00:29:19.925 [2024-11-19 03:10:30.527920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.925 [2024-11-19 03:10:30.527943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52540, cid 4, qid 0 00:29:19.925 [2024-11-19 03:10:30.528054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:19.925 [2024-11-19 03:10:30.528069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:19.925 [2024-11-19 03:10:30.528076] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.528082] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=4096, cccid=4 00:29:19.925 [2024-11-19 03:10:30.528090] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc52540) on tqpair(0xbf7650): expected_datao=0, payload_size=4096 00:29:19.925 [2024-11-19 03:10:30.528097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.528107] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:19.925 [2024-11-19 03:10:30.528115] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.571708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.186 [2024-11-19 03:10:30.571727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.186 [2024-11-19 03:10:30.571734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.571741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52540) on tqpair=0xbf7650 00:29:20.186 [2024-11-19 03:10:30.571768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.571788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.571807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:20.186 [2024-11-19 03:10:30.571815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7650) 00:29:20.186 [2024-11-19 03:10:30.571827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.186 [2024-11-19 03:10:30.571851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52540, cid 4, qid 0 00:29:20.186 [2024-11-19 03:10:30.571950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.186 [2024-11-19 03:10:30.571962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.186 [2024-11-19 03:10:30.571969] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.571975] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=4096, cccid=4 00:29:20.186 [2024-11-19 03:10:30.571983] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc52540) on tqpair(0xbf7650): expected_datao=0, payload_size=4096 00:29:20.186 [2024-11-19 03:10:30.571990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.572007] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.572015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.612847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.186 [2024-11-19 03:10:30.612866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.186 [2024-11-19 03:10:30.612874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.612881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52540) on tqpair=0xbf7650 00:29:20.186 [2024-11-19 03:10:30.612896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.612912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.612930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.612943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.612952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.612961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.612971] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:20.186 [2024-11-19 03:10:30.612979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:20.186 [2024-11-19 03:10:30.612988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:20.186 [2024-11-19 
03:10:30.613006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.613015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7650) 00:29:20.186 [2024-11-19 03:10:30.613026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.186 [2024-11-19 03:10:30.613037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.613045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.613051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf7650) 00:29:20.186 [2024-11-19 03:10:30.613064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.186 [2024-11-19 03:10:30.613092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52540, cid 4, qid 0 00:29:20.186 [2024-11-19 03:10:30.613104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc526c0, cid 5, qid 0 00:29:20.186 [2024-11-19 03:10:30.613193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.186 [2024-11-19 03:10:30.613206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.186 [2024-11-19 03:10:30.613213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.186 [2024-11-19 03:10:30.613219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52540) on tqpair=0xbf7650 00:29:20.186 [2024-11-19 03:10:30.613229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.613239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.613245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc526c0) on tqpair=0xbf7650 00:29:20.187 [2024-11-19 03:10:30.613266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf7650) 00:29:20.187 [2024-11-19 03:10:30.613286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.187 [2024-11-19 03:10:30.613307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc526c0, cid 5, qid 0 00:29:20.187 [2024-11-19 03:10:30.613385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.613399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.613406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc526c0) on tqpair=0xbf7650 00:29:20.187 [2024-11-19 03:10:30.613428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf7650) 00:29:20.187 [2024-11-19 03:10:30.613447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:20.187 [2024-11-19 03:10:30.613468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc526c0, cid 5, qid 0 00:29:20.187 [2024-11-19 03:10:30.613553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.613566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.613573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc526c0) on tqpair=0xbf7650 00:29:20.187 [2024-11-19 03:10:30.613595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf7650) 00:29:20.187 [2024-11-19 03:10:30.613614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.187 [2024-11-19 03:10:30.613635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc526c0, cid 5, qid 0 00:29:20.187 [2024-11-19 03:10:30.613720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.613734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.613741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc526c0) on tqpair=0xbf7650 00:29:20.187 [2024-11-19 03:10:30.613771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf7650) 00:29:20.187 [2024-11-19 03:10:30.613796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.187 [2024-11-19 03:10:30.613808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7650) 00:29:20.187 [2024-11-19 03:10:30.613825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.187 [2024-11-19 03:10:30.613836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xbf7650) 00:29:20.187 [2024-11-19 03:10:30.613853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.187 [2024-11-19 03:10:30.613864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.613872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbf7650) 00:29:20.187 [2024-11-19 03:10:30.613881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.187 [2024-11-19 03:10:30.613903] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc526c0, cid 5, qid 0 00:29:20.187 [2024-11-19 03:10:30.613915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52540, cid 4, qid 0 00:29:20.187 [2024-11-19 03:10:30.613923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc52840, cid 6, qid 0 00:29:20.187 [2024-11-19 03:10:30.613931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc529c0, cid 7, qid 0 00:29:20.187 [2024-11-19 03:10:30.614126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.187 [2024-11-19 03:10:30.614139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.187 [2024-11-19 03:10:30.614146] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614152] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=8192, cccid=5 00:29:20.187 [2024-11-19 03:10:30.614160] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc526c0) on tqpair(0xbf7650): expected_datao=0, payload_size=8192 00:29:20.187 [2024-11-19 03:10:30.614167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614188] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614198] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.187 [2024-11-19 03:10:30.614215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.187 [2024-11-19 03:10:30.614221] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614228] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=512, cccid=4 00:29:20.187 [2024-11-19 03:10:30.614235] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc52540) on tqpair(0xbf7650): expected_datao=0, payload_size=512 00:29:20.187 [2024-11-19 03:10:30.614242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614251] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.187 [2024-11-19 03:10:30.614275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.187 [2024-11-19 03:10:30.614281] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614291] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=512, cccid=6 00:29:20.187 [2024-11-19 03:10:30.614299] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc52840) on tqpair(0xbf7650): expected_datao=0, payload_size=512 00:29:20.187 [2024-11-19 03:10:30.614306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614315] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614322] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.187 [2024-11-19 03:10:30.614339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:29:20.187 [2024-11-19 03:10:30.614345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614351] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7650): datao=0, datal=4096, cccid=7 00:29:20.187 [2024-11-19 03:10:30.614359] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc529c0) on tqpair(0xbf7650): expected_datao=0, payload_size=4096 00:29:20.187 [2024-11-19 03:10:30.614366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614375] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.614383] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.657724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.657741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.657748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.657755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc526c0) on tqpair=0xbf7650 00:29:20.187 [2024-11-19 03:10:30.657792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.657804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.657811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.657817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52540) on tqpair=0xbf7650 00:29:20.187 [2024-11-19 03:10:30.657833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.657843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.657850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.657856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52840) on tqpair=0xbf7650 00:29:20.187 [2024-11-19 03:10:30.657866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.187 [2024-11-19 03:10:30.657876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.187 [2024-11-19 03:10:30.657882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.187 [2024-11-19 03:10:30.657889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc529c0) on tqpair=0xbf7650 00:29:20.187 ===================================================== 00:29:20.187 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.187 ===================================================== 00:29:20.187 Controller Capabilities/Features 00:29:20.187 ================================ 00:29:20.187 Vendor ID: 8086 00:29:20.187 Subsystem Vendor ID: 8086 00:29:20.187 Serial Number: SPDK00000000000001 00:29:20.187 Model Number: SPDK bdev Controller 00:29:20.187 Firmware Version: 25.01 00:29:20.187 Recommended Arb Burst: 6 00:29:20.187 IEEE OUI Identifier: e4 d2 5c 00:29:20.187 Multi-path I/O 00:29:20.187 May have multiple subsystem ports: Yes 00:29:20.187 May have multiple controllers: Yes 00:29:20.187 Associated with SR-IOV VF: No 00:29:20.187 Max Data Transfer Size: 131072 00:29:20.187 Max Number of Namespaces: 32 00:29:20.187 Max Number of I/O Queues: 127 00:29:20.187 NVMe Specification Version (VS): 1.3 
00:29:20.187 NVMe Specification Version (Identify): 1.3 00:29:20.187 Maximum Queue Entries: 128 00:29:20.187 Contiguous Queues Required: Yes 00:29:20.187 Arbitration Mechanisms Supported 00:29:20.187 Weighted Round Robin: Not Supported 00:29:20.188 Vendor Specific: Not Supported 00:29:20.188 Reset Timeout: 15000 ms 00:29:20.188 Doorbell Stride: 4 bytes 00:29:20.188 NVM Subsystem Reset: Not Supported 00:29:20.188 Command Sets Supported 00:29:20.188 NVM Command Set: Supported 00:29:20.188 Boot Partition: Not Supported 00:29:20.188 Memory Page Size Minimum: 4096 bytes 00:29:20.188 Memory Page Size Maximum: 4096 bytes 00:29:20.188 Persistent Memory Region: Not Supported 00:29:20.188 Optional Asynchronous Events Supported 00:29:20.188 Namespace Attribute Notices: Supported 00:29:20.188 Firmware Activation Notices: Not Supported 00:29:20.188 ANA Change Notices: Not Supported 00:29:20.188 PLE Aggregate Log Change Notices: Not Supported 00:29:20.188 LBA Status Info Alert Notices: Not Supported 00:29:20.188 EGE Aggregate Log Change Notices: Not Supported 00:29:20.188 Normal NVM Subsystem Shutdown event: Not Supported 00:29:20.188 Zone Descriptor Change Notices: Not Supported 00:29:20.188 Discovery Log Change Notices: Not Supported 00:29:20.188 Controller Attributes 00:29:20.188 128-bit Host Identifier: Supported 00:29:20.188 Non-Operational Permissive Mode: Not Supported 00:29:20.188 NVM Sets: Not Supported 00:29:20.188 Read Recovery Levels: Not Supported 00:29:20.188 Endurance Groups: Not Supported 00:29:20.188 Predictable Latency Mode: Not Supported 00:29:20.188 Traffic Based Keep ALive: Not Supported 00:29:20.188 Namespace Granularity: Not Supported 00:29:20.188 SQ Associations: Not Supported 00:29:20.188 UUID List: Not Supported 00:29:20.188 Multi-Domain Subsystem: Not Supported 00:29:20.188 Fixed Capacity Management: Not Supported 00:29:20.188 Variable Capacity Management: Not Supported 00:29:20.188 Delete Endurance Group: Not Supported 00:29:20.188 Delete NVM Set: Not Supported 00:29:20.188 Extended LBA Formats Supported: Not Supported 00:29:20.188 Flexible Data Placement Supported: Not Supported 00:29:20.188 00:29:20.188 Controller Memory Buffer Support 00:29:20.188 ================================ 00:29:20.188 Supported: No 00:29:20.188 00:29:20.188 Persistent Memory Region Support 00:29:20.188 ================================ 00:29:20.188 Supported: No 00:29:20.188 00:29:20.188 Admin Command Set Attributes 00:29:20.188 ============================ 00:29:20.188 Security Send/Receive: Not Supported 00:29:20.188 Format NVM: Not Supported 00:29:20.188 Firmware Activate/Download: Not Supported 00:29:20.188 Namespace Management: Not Supported 00:29:20.188 Device Self-Test: Not Supported 00:29:20.188 Directives: Not Supported 00:29:20.188 NVMe-MI: Not Supported 00:29:20.188 Virtualization Management: Not Supported 00:29:20.188 Doorbell Buffer Config: Not Supported 00:29:20.188 Get LBA Status Capability: Not Supported 00:29:20.188 Command & Feature Lockdown Capability: Not Supported 00:29:20.188 Abort Command Limit: 4 00:29:20.188 Async Event Request Limit: 4 00:29:20.188 Number of Firmware Slots: N/A 00:29:20.188 Firmware Slot 1 Read-Only: N/A 00:29:20.188 Firmware Activation Without Reset: N/A 00:29:20.188 Multiple Update Detection Support: N/A 00:29:20.188 Firmware Update Granularity: No Information Provided 00:29:20.188 Per-Namespace SMART Log: No 00:29:20.188 Asymmetric Namespace Access Log Page: Not Supported 00:29:20.188 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:20.188 Command Effects 
Log Page: Supported 00:29:20.188 Get Log Page Extended Data: Supported 00:29:20.188 Telemetry Log Pages: Not Supported 00:29:20.188 Persistent Event Log Pages: Not Supported 00:29:20.188 Supported Log Pages Log Page: May Support 00:29:20.188 Commands Supported & Effects Log Page: Not Supported 00:29:20.188 Feature Identifiers & Effects Log Page:May Support 00:29:20.188 NVMe-MI Commands & Effects Log Page: May Support 00:29:20.188 Data Area 4 for Telemetry Log: Not Supported 00:29:20.188 Error Log Page Entries Supported: 128 00:29:20.188 Keep Alive: Supported 00:29:20.188 Keep Alive Granularity: 10000 ms 00:29:20.188 00:29:20.188 NVM Command Set Attributes 00:29:20.188 ========================== 00:29:20.188 Submission Queue Entry Size 00:29:20.188 Max: 64 00:29:20.188 Min: 64 00:29:20.188 Completion Queue Entry Size 00:29:20.188 Max: 16 00:29:20.188 Min: 16 00:29:20.188 Number of Namespaces: 32 00:29:20.188 Compare Command: Supported 00:29:20.188 Write Uncorrectable Command: Not Supported 00:29:20.188 Dataset Management Command: Supported 00:29:20.188 Write Zeroes Command: Supported 00:29:20.188 Set Features Save Field: Not Supported 00:29:20.188 Reservations: Supported 00:29:20.188 Timestamp: Not Supported 00:29:20.188 Copy: Supported 00:29:20.188 Volatile Write Cache: Present 00:29:20.188 Atomic Write Unit (Normal): 1 00:29:20.188 Atomic Write Unit (PFail): 1 00:29:20.188 Atomic Compare & Write Unit: 1 00:29:20.188 Fused Compare & Write: Supported 00:29:20.188 Scatter-Gather List 00:29:20.188 SGL Command Set: Supported 00:29:20.188 SGL Keyed: Supported 00:29:20.188 SGL Bit Bucket Descriptor: Not Supported 00:29:20.188 SGL Metadata Pointer: Not Supported 00:29:20.188 Oversized SGL: Not Supported 00:29:20.188 SGL Metadata Address: Not Supported 00:29:20.188 SGL Offset: Supported 00:29:20.188 Transport SGL Data Block: Not Supported 00:29:20.188 Replay Protected Memory Block: Not Supported 00:29:20.188 00:29:20.188 Firmware Slot Information 00:29:20.188 ========================= 00:29:20.188 Active slot: 1 00:29:20.188 Slot 1 Firmware Revision: 25.01 00:29:20.188 00:29:20.188 00:29:20.188 Commands Supported and Effects 00:29:20.188 ============================== 00:29:20.188 Admin Commands 00:29:20.188 -------------- 00:29:20.188 Get Log Page (02h): Supported 00:29:20.188 Identify (06h): Supported 00:29:20.188 Abort (08h): Supported 00:29:20.188 Set Features (09h): Supported 00:29:20.188 Get Features (0Ah): Supported 00:29:20.188 Asynchronous Event Request (0Ch): Supported 00:29:20.188 Keep Alive (18h): Supported 00:29:20.188 I/O Commands 00:29:20.188 ------------ 00:29:20.188 Flush (00h): Supported LBA-Change 00:29:20.188 Write (01h): Supported LBA-Change 00:29:20.188 Read (02h): Supported 00:29:20.188 Compare (05h): Supported 00:29:20.188 Write Zeroes (08h): Supported LBA-Change 00:29:20.188 Dataset Management (09h): Supported LBA-Change 00:29:20.188 Copy (19h): Supported LBA-Change 00:29:20.188 00:29:20.188 Error Log 00:29:20.188 ========= 00:29:20.188 00:29:20.188 Arbitration 00:29:20.188 =========== 00:29:20.188 Arbitration Burst: 1 00:29:20.188 00:29:20.188 Power Management 00:29:20.188 ================ 00:29:20.188 Number of Power States: 1 00:29:20.188 Current Power State: Power State #0 00:29:20.188 Power State #0: 00:29:20.188 Max Power: 0.00 W 00:29:20.188 Non-Operational State: Operational 00:29:20.188 Entry Latency: Not Reported 00:29:20.188 Exit Latency: Not Reported 00:29:20.188 Relative Read Throughput: 0 00:29:20.188 Relative Read Latency: 0 00:29:20.188 Relative Write 
Throughput: 0 00:29:20.188 Relative Write Latency: 0 00:29:20.188 Idle Power: Not Reported 00:29:20.188 Active Power: Not Reported 00:29:20.188 Non-Operational Permissive Mode: Not Supported 00:29:20.188 00:29:20.188 Health Information 00:29:20.188 ================== 00:29:20.188 Critical Warnings: 00:29:20.188 Available Spare Space: OK 00:29:20.188 Temperature: OK 00:29:20.188 Device Reliability: OK 00:29:20.188 Read Only: No 00:29:20.188 Volatile Memory Backup: OK 00:29:20.188 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:20.188 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:20.188 Available Spare: 0% 00:29:20.188 Available Spare Threshold: 0% 00:29:20.188 Life Percentage Used:[2024-11-19 03:10:30.658003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.188 [2024-11-19 03:10:30.658016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbf7650) 00:29:20.188 [2024-11-19 03:10:30.658028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.188 [2024-11-19 03:10:30.658052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc529c0, cid 7, qid 0 00:29:20.188 [2024-11-19 03:10:30.658137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.188 [2024-11-19 03:10:30.658151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.188 [2024-11-19 03:10:30.658158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.188 [2024-11-19 03:10:30.658164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc529c0) on tqpair=0xbf7650 00:29:20.188 [2024-11-19 03:10:30.658207] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:20.188 [2024-11-19 03:10:30.658230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51f40) on tqpair=0xbf7650 00:29:20.188 [2024-11-19 03:10:30.658242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.189 [2024-11-19 03:10:30.658251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc520c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.658259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.189 [2024-11-19 03:10:30.658267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc52240) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.658275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.189 [2024-11-19 03:10:30.658283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.658291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.189 [2024-11-19 03:10:30.658303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.658328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.658365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.658517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.658530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.658537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.658554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.658579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.658605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.658701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.658715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.658721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.658736] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:20.189 [2024-11-19 03:10:30.658743] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:20.189 [2024-11-19 03:10:30.658759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.658785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.658806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.658886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.658903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.658911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.658935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.658951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 
[2024-11-19 03:10:30.658961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.658982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.659063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.659077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.659084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.659106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.659132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.659153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.659222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.659234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.659241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.659262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.659288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.659309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.659380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.659394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.659401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.659423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.659449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.659470] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.659541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.659555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.659565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.659589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.659614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.659635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.659719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.659733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.659740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.659763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.659789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.659810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.659888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.659900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.659906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.659928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.659944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.659954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.659975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.660046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 
03:10:30.660060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.189 [2024-11-19 03:10:30.660067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.660073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.189 [2024-11-19 03:10:30.660089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.660098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.189 [2024-11-19 03:10:30.660105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.189 [2024-11-19 03:10:30.660115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.189 [2024-11-19 03:10:30.660136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.189 [2024-11-19 03:10:30.660207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.189 [2024-11-19 03:10:30.660221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.660227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.660255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.660282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.660303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.660384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.660398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.660404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.660427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.660453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.660474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.660545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.660557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.660563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 
03:10:30.660570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.660586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.660612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.660632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.660715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.660729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.660736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.660759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.660785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.660806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.660880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.660894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.660900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.660922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.660943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.660953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.660974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.661053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.661066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.661073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.661095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:29:20.190 [2024-11-19 03:10:30.661104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.661121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.661142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.661216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.661230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.661236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.661259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.661284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.661305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.661381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.661394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.661401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.661423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.661449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.661470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.661545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.661557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.661564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.661586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.661606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.661617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.661638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.665702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.665718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.665725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.665731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.665748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.665757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.665764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7650) 00:29:20.190 [2024-11-19 03:10:30.665774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.190 [2024-11-19 03:10:30.665796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc523c0, cid 3, qid 0 00:29:20.190 [2024-11-19 03:10:30.665915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.190 [2024-11-19 03:10:30.665929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.190 [2024-11-19 03:10:30.665936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.190 [2024-11-19 03:10:30.665942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc523c0) on tqpair=0xbf7650 00:29:20.190 [2024-11-19 03:10:30.665955] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:29:20.190 0% 00:29:20.190 Data Units Read: 0 00:29:20.190 Data Units Written: 0 00:29:20.190 Host Read Commands: 0 00:29:20.190 Host Write Commands: 0 00:29:20.190 Controller Busy Time: 0 minutes 00:29:20.190 Power Cycles: 0 00:29:20.190 Power On Hours: 0 hours 00:29:20.190 Unsafe Shutdowns: 0 00:29:20.190 Unrecoverable Media Errors: 0 00:29:20.190 Lifetime Error Log Entries: 0 00:29:20.190 Warning Temperature Time: 0 minutes 00:29:20.190 Critical Temperature Time: 0 minutes 00:29:20.190 00:29:20.190 Number of Queues 00:29:20.190 ================ 00:29:20.190 Number of I/O Submission Queues: 127 00:29:20.190 Number of I/O Completion Queues: 127 00:29:20.190 00:29:20.190 Active Namespaces 00:29:20.190 ================= 00:29:20.190 Namespace ID:1 00:29:20.190 Error Recovery Timeout: Unlimited 00:29:20.190 Command Set Identifier: NVM (00h) 00:29:20.190 Deallocate: Supported 00:29:20.190 Deallocated/Unwritten Error: Not Supported 00:29:20.190 Deallocated Read Value: Unknown 00:29:20.190 Deallocate in Write Zeroes: Not Supported 00:29:20.190 Deallocated Guard Field: 0xFFFF 00:29:20.191 Flush: Supported 00:29:20.191 Reservation: Supported 00:29:20.191 Namespace Sharing Capabilities: Multiple Controllers 00:29:20.191 Size (in LBAs): 131072 (0GiB) 00:29:20.191 Capacity (in LBAs): 131072 (0GiB) 00:29:20.191 Utilization (in LBAs): 131072 (0GiB) 00:29:20.191 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:20.191 EUI64: ABCDEF0123456789 00:29:20.191 UUID: 
5065b88a-cf24-4ae5-bd00-a751977965d3 00:29:20.191 Thin Provisioning: Not Supported 00:29:20.191 Per-NS Atomic Units: Yes 00:29:20.191 Atomic Boundary Size (Normal): 0 00:29:20.191 Atomic Boundary Size (PFail): 0 00:29:20.191 Atomic Boundary Offset: 0 00:29:20.191 Maximum Single Source Range Length: 65535 00:29:20.191 Maximum Copy Length: 65535 00:29:20.191 Maximum Source Range Count: 1 00:29:20.191 NGUID/EUI64 Never Reused: No 00:29:20.191 Namespace Write Protected: No 00:29:20.191 Number of LBA Formats: 1 00:29:20.191 Current LBA Format: LBA Format #00 00:29:20.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:20.191 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.191 rmmod nvme_tcp 00:29:20.191 rmmod nvme_fabrics 00:29:20.191 rmmod nvme_keyring 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 339852 ']' 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 339852 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 339852 ']' 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 339852 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 339852 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 339852' 00:29:20.191 killing process with pid 339852 00:29:20.191 03:10:30 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 339852 00:29:20.191 03:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 339852 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.451 03:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.985 00:29:22.985 real 0m5.778s 00:29:22.985 user 0m5.246s 00:29:22.985 sys 0m2.002s 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.985 ************************************ 00:29:22.985 END TEST nvmf_identify 00:29:22.985 ************************************ 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.985 ************************************ 00:29:22.985 START TEST nvmf_perf 00:29:22.985 ************************************ 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:22.985 * Looking for test storage... 
00:29:22.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:22.985 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:22.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.986 --rc genhtml_branch_coverage=1 00:29:22.986 --rc genhtml_function_coverage=1 00:29:22.986 --rc genhtml_legend=1 00:29:22.986 --rc geninfo_all_blocks=1 00:29:22.986 --rc geninfo_unexecuted_blocks=1 00:29:22.986 00:29:22.986 ' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:22.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.986 --rc genhtml_branch_coverage=1 00:29:22.986 --rc genhtml_function_coverage=1 00:29:22.986 --rc genhtml_legend=1 00:29:22.986 --rc geninfo_all_blocks=1 00:29:22.986 --rc geninfo_unexecuted_blocks=1 00:29:22.986 00:29:22.986 ' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:22.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.986 --rc genhtml_branch_coverage=1 00:29:22.986 --rc genhtml_function_coverage=1 00:29:22.986 --rc genhtml_legend=1 00:29:22.986 --rc geninfo_all_blocks=1 00:29:22.986 --rc geninfo_unexecuted_blocks=1 00:29:22.986 00:29:22.986 ' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:22.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.986 --rc genhtml_branch_coverage=1 00:29:22.986 --rc genhtml_function_coverage=1 00:29:22.986 --rc genhtml_legend=1 00:29:22.986 --rc geninfo_all_blocks=1 00:29:22.986 --rc geninfo_unexecuted_blocks=1 00:29:22.986 00:29:22.986 ' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:22.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.986 03:10:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:22.986 03:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.889 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:24.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:24.890 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:24.890 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.890 03:10:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:24.890 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.890 03:10:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:29:24.890 00:29:24.890 --- 10.0.0.2 ping statistics --- 00:29:24.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.890 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:29:24.890 00:29:24.890 --- 10.0.0.1 ping statistics --- 00:29:24.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.890 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.890 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=341938 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 341938 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 341938 ']' 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:25.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.163 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:25.163 [2024-11-19 03:10:35.573460] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:29:25.163 [2024-11-19 03:10:35.573549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.163 [2024-11-19 03:10:35.647263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.163 [2024-11-19 03:10:35.697318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.163 [2024-11-19 03:10:35.697370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.163 [2024-11-19 03:10:35.697393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.163 [2024-11-19 03:10:35.697403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.163 [2024-11-19 03:10:35.697413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.163 [2024-11-19 03:10:35.699186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.163 [2024-11-19 03:10:35.699249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.163 [2024-11-19 03:10:35.699317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.163 [2024-11-19 03:10:35.699319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:25.421 03:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:28.703 03:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:28.703 03:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:28.703 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:28.703 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:28.973 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:29:28.973 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:28.973 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:28.973 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:28.973 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:29.231 [2024-11-19 03:10:39.797993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.231 03:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.489 03:10:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:29.489 03:10:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.055 03:10:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:30.055 03:10:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:30.313 03:10:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.313 [2024-11-19 03:10:40.930079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.570 03:10:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:30.828 03:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:30.828 03:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:30.828 03:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:30.828 03:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:32.201 Initializing NVMe Controllers 00:29:32.201 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:32.201 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:32.201 Initialization complete. Launching workers. 
00:29:32.201 ======================================================== 00:29:32.201 Latency(us) 00:29:32.201 Device Information : IOPS MiB/s Average min max 00:29:32.201 PCIE (0000:88:00.0) NSID 1 from core 0: 85350.88 333.40 374.43 37.85 4316.00 00:29:32.201 ======================================================== 00:29:32.201 Total : 85350.88 333.40 374.43 37.85 4316.00 00:29:32.201 00:29:32.201 03:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.133 Initializing NVMe Controllers 00:29:33.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:33.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:33.133 Initialization complete. Launching workers. 00:29:33.133 ======================================================== 00:29:33.133 Latency(us) 00:29:33.133 Device Information : IOPS MiB/s Average min max 00:29:33.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 10686.47 136.50 46047.68 00:29:33.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17937.29 7945.45 47901.09 00:29:33.133 ======================================================== 00:29:33.133 Total : 150.00 0.59 13393.45 136.50 47901.09 00:29:33.133 00:29:33.391 03:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.764 Initializing NVMe Controllers 00:29:34.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:34.764 Initialization complete. Launching workers. 00:29:34.764 ======================================================== 00:29:34.764 Latency(us) 00:29:34.764 Device Information : IOPS MiB/s Average min max 00:29:34.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8532.99 33.33 3765.94 634.16 7731.31 00:29:34.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3855.00 15.06 8339.86 4596.02 15993.47 00:29:34.764 ======================================================== 00:29:34.764 Total : 12387.99 48.39 5189.29 634.16 15993.47 00:29:34.764 00:29:34.764 03:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:34.764 03:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:34.764 03:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.293 Initializing NVMe Controllers 00:29:37.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.293 Controller IO queue size 128, less than required. 00:29:37.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:37.293 Controller IO queue size 128, less than required. 00:29:37.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:37.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:37.293 Initialization complete. Launching workers. 00:29:37.293 ======================================================== 00:29:37.293 Latency(us) 00:29:37.293 Device Information : IOPS MiB/s Average min max 00:29:37.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1453.43 363.36 90006.45 58970.03 143464.95 00:29:37.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 530.24 132.56 250772.15 88805.51 414768.27 00:29:37.293 ======================================================== 00:29:37.293 Total : 1983.67 495.92 132979.81 58970.03 414768.27 00:29:37.293 00:29:37.293 03:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:37.293 No valid NVMe controllers or AIO or URING devices found 00:29:37.293 Initializing NVMe Controllers 00:29:37.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.293 Controller IO queue size 128, less than required. 00:29:37.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.293 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:37.293 Controller IO queue size 128, less than required. 00:29:37.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.293 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:37.293 WARNING: Some requested NVMe devices were skipped 00:29:37.293 03:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:39.836 Initializing NVMe Controllers 00:29:39.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.836 Controller IO queue size 128, less than required. 00:29:39.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:39.836 Controller IO queue size 128, less than required. 00:29:39.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:39.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:39.836 Initialization complete. Launching workers. 
00:29:39.836 00:29:39.836 ==================== 00:29:39.836 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:39.836 TCP transport: 00:29:39.836 polls: 10371 00:29:39.836 idle_polls: 7133 00:29:39.836 sock_completions: 3238 00:29:39.836 nvme_completions: 6105 00:29:39.836 submitted_requests: 9176 00:29:39.836 queued_requests: 1 00:29:39.836 00:29:39.836 ==================== 00:29:39.836 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:39.836 TCP transport: 00:29:39.836 polls: 13928 00:29:39.836 idle_polls: 10290 00:29:39.836 sock_completions: 3638 00:29:39.836 nvme_completions: 6501 00:29:39.836 submitted_requests: 9810 00:29:39.836 queued_requests: 1 00:29:39.836 ======================================================== 00:29:39.836 Latency(us) 00:29:39.836 Device Information : IOPS MiB/s Average min max 00:29:39.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1522.87 380.72 85576.80 62026.12 151922.85 00:29:39.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1621.67 405.42 80599.10 41067.39 126547.49 00:29:39.836 ======================================================== 00:29:39.836 Total : 3144.54 786.14 83009.75 41067.39 151922.85 00:29:39.836 00:29:39.836 03:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:39.836 03:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.094 03:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:40.094 03:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:40.094 03:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:43.373 03:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=91c234cd-b8c3-4247-a459-a9cd77006699 00:29:43.373 03:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 91c234cd-b8c3-4247-a459-a9cd77006699 00:29:43.373 03:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=91c234cd-b8c3-4247-a459-a9cd77006699 00:29:43.373 03:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:43.373 03:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:43.373 03:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:43.373 03:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:43.632 { 00:29:43.632 "uuid": "91c234cd-b8c3-4247-a459-a9cd77006699", 00:29:43.632 "name": "lvs_0", 00:29:43.632 "base_bdev": "Nvme0n1", 00:29:43.632 "total_data_clusters": 238234, 00:29:43.632 "free_clusters": 238234, 00:29:43.632 "block_size": 512, 00:29:43.632 "cluster_size": 4194304 00:29:43.632 } 00:29:43.632 ]' 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="91c234cd-b8c3-4247-a459-a9cd77006699") .free_clusters' 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:29:43.632 03:10:54 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="91c234cd-b8c3-4247-a459-a9cd77006699") .cluster_size' 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:29:43.632 952936 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:43.632 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91c234cd-b8c3-4247-a459-a9cd77006699 lbd_0 20480 00:29:44.197 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b6e4b165-5e5d-42f5-b511-f6bacbfa44bd 00:29:44.197 03:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore b6e4b165-5e5d-42f5-b511-f6bacbfa44bd lvs_n_0 00:29:45.130 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=a051be83-ff99-4d57-b979-be9092feb6dd 00:29:45.130 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb a051be83-ff99-4d57-b979-be9092feb6dd 00:29:45.130 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=a051be83-ff99-4d57-b979-be9092feb6dd 00:29:45.130 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:45.130 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:45.130 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:45.130 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:45.388 { 00:29:45.388 "uuid": "91c234cd-b8c3-4247-a459-a9cd77006699", 00:29:45.388 "name": "lvs_0", 00:29:45.388 "base_bdev": "Nvme0n1", 00:29:45.388 "total_data_clusters": 238234, 00:29:45.388 "free_clusters": 233114, 00:29:45.388 "block_size": 512, 00:29:45.388 "cluster_size": 4194304 00:29:45.388 }, 00:29:45.388 { 00:29:45.388 "uuid": "a051be83-ff99-4d57-b979-be9092feb6dd", 00:29:45.388 "name": "lvs_n_0", 00:29:45.388 "base_bdev": "b6e4b165-5e5d-42f5-b511-f6bacbfa44bd", 00:29:45.388 "total_data_clusters": 5114, 00:29:45.388 "free_clusters": 5114, 00:29:45.388 "block_size": 512, 00:29:45.388 "cluster_size": 4194304 00:29:45.388 } 00:29:45.388 ]' 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a051be83-ff99-4d57-b979-be9092feb6dd") .free_clusters' 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a051be83-ff99-4d57-b979-be9092feb6dd") .cluster_size' 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:29:45.388 20456 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:45.388 03:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a051be83-ff99-4d57-b979-be9092feb6dd lbd_nest_0 20456 00:29:45.646 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=d84cd92e-a380-4f82-9775-7d32ae6676a5 00:29:45.646 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:45.904 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:45.904 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d84cd92e-a380-4f82-9775-7d32ae6676a5 00:29:46.162 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.421 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:46.421 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:46.421 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:46.421 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:46.421 03:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:58.610 Initializing NVMe Controllers 00:29:58.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:58.610 Initialization complete. Launching workers. 00:29:58.610 ======================================================== 00:29:58.610 Latency(us) 00:29:58.610 Device Information : IOPS MiB/s Average min max 00:29:58.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.58 0.02 22013.96 168.45 45826.90 00:29:58.610 ======================================================== 00:29:58.610 Total : 45.58 0.02 22013.96 168.45 45826.90 00:29:58.610 00:29:58.610 03:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:58.610 03:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:08.574 Initializing NVMe Controllers 00:30:08.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.574 Initialization complete. Launching workers. 
00:30:08.574 ======================================================== 00:30:08.574 Latency(us) 00:30:08.574 Device Information : IOPS MiB/s Average min max 00:30:08.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.87 9.48 13189.78 5039.49 47903.76 00:30:08.574 ======================================================== 00:30:08.574 Total : 75.87 9.48 13189.78 5039.49 47903.76 00:30:08.574 00:30:08.574 03:11:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:08.574 03:11:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:08.574 03:11:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.538 Initializing NVMe Controllers 00:30:18.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:18.538 Initialization complete. Launching workers. 00:30:18.538 ======================================================== 00:30:18.538 Latency(us) 00:30:18.538 Device Information : IOPS MiB/s Average min max 00:30:18.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7200.70 3.52 4443.55 326.38 11117.72 00:30:18.538 ======================================================== 00:30:18.538 Total : 7200.70 3.52 4443.55 326.38 11117.72 00:30:18.538 00:30:18.538 03:11:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:18.538 03:11:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:28.516 Initializing NVMe Controllers 00:30:28.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:28.516 Initialization complete. Launching workers. 00:30:28.516 ======================================================== 00:30:28.516 Latency(us) 00:30:28.516 Device Information : IOPS MiB/s Average min max 00:30:28.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3926.99 490.87 8152.27 661.26 16005.91 00:30:28.516 ======================================================== 00:30:28.516 Total : 3926.99 490.87 8152.27 661.26 16005.91 00:30:28.516 00:30:28.516 03:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:28.516 03:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:28.516 03:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:38.481 Initializing NVMe Controllers 00:30:38.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.481 Controller IO queue size 128, less than required. 00:30:38.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:38.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:38.481 Initialization complete. Launching workers. 00:30:38.481 ======================================================== 00:30:38.481 Latency(us) 00:30:38.481 Device Information : IOPS MiB/s Average min max 00:30:38.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11664.48 5.70 10975.19 1448.34 27441.97 00:30:38.481 ======================================================== 00:30:38.481 Total : 11664.48 5.70 10975.19 1448.34 27441.97 00:30:38.481 00:30:38.481 03:11:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:38.481 03:11:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:50.684 Initializing NVMe Controllers 00:30:50.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:50.684 Controller IO queue size 128, less than required. 00:30:50.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:50.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:50.684 Initialization complete. Launching workers. 00:30:50.685 ======================================================== 00:30:50.685 Latency(us) 00:30:50.685 Device Information : IOPS MiB/s Average min max 00:30:50.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.48 148.94 107842.14 23859.94 238116.10 00:30:50.685 ======================================================== 00:30:50.685 Total : 1191.48 148.94 107842.14 23859.94 238116.10 00:30:50.685 00:30:50.685 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:50.685 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d84cd92e-a380-4f82-9775-7d32ae6676a5 00:30:50.685 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:50.685 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6e4b165-5e5d-42f5-b511-f6bacbfa44bd 00:30:50.685 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.685 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.685 rmmod nvme_tcp 
00:30:50.685 rmmod nvme_fabrics 00:30:50.685 rmmod nvme_keyring 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 341938 ']' 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 341938 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 341938 ']' 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 341938 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341938 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341938' 00:30:50.944 killing process with pid 341938 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 341938 00:30:50.944 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 341938 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.318 03:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.854 03:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.854 00:30:54.854 real 1m31.848s 00:30:54.855 user 5m39.669s 00:30:54.855 sys 0m15.710s 00:30:54.855 03:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.855 03:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:54.855 ************************************ 00:30:54.855 END TEST nvmf_perf 00:30:54.855 ************************************ 00:30:54.855 03:12:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:54.855 03:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:54.855 03:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.855 03:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.855 ************************************ 00:30:54.855 START TEST nvmf_fio_host 00:30:54.855 ************************************ 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:54.855 * Looking for test storage... 00:30:54.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:54.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.855 --rc genhtml_branch_coverage=1 00:30:54.855 --rc genhtml_function_coverage=1 00:30:54.855 --rc genhtml_legend=1 00:30:54.855 --rc geninfo_all_blocks=1 00:30:54.855 --rc geninfo_unexecuted_blocks=1 00:30:54.855 00:30:54.855 ' 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:54.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.855 --rc genhtml_branch_coverage=1 00:30:54.855 --rc genhtml_function_coverage=1 00:30:54.855 --rc genhtml_legend=1 00:30:54.855 --rc geninfo_all_blocks=1 00:30:54.855 --rc geninfo_unexecuted_blocks=1 00:30:54.855 00:30:54.855 ' 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:54.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.855 --rc genhtml_branch_coverage=1 00:30:54.855 --rc genhtml_function_coverage=1 00:30:54.855 --rc genhtml_legend=1 00:30:54.855 --rc geninfo_all_blocks=1 00:30:54.855 --rc geninfo_unexecuted_blocks=1 00:30:54.855 00:30:54.855 ' 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:54.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.855 --rc genhtml_branch_coverage=1 00:30:54.855 --rc genhtml_function_coverage=1 00:30:54.855 --rc genhtml_legend=1 00:30:54.855 --rc geninfo_all_blocks=1 00:30:54.855 --rc geninfo_unexecuted_blocks=1 00:30:54.855 00:30:54.855 ' 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.855 03:12:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.855 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:54.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:54.856 
03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.856 03:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:56.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.759 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:56.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:56.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:56.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.760 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:57.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:30:57.020 00:30:57.020 --- 10.0.0.2 ping statistics --- 00:30:57.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.020 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:57.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:30:57.020 00:30:57.020 --- 10.0.0.1 ping statistics --- 00:30:57.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.020 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=354379 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 354379 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 354379 ']' 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.020 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.020 [2024-11-19 03:12:07.577516] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
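The target in this run listens from inside a network namespace: one port of the NIC pair (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the initiator keeps cvl_0_1 as 10.0.0.1, and nvmf_tgt is launched inside that namespace. Stripped of the xtrace noise, the plumbing traced above amounts to roughly the following (paths again shortened to $SPDK):

# move one port into a private namespace and give each side an address
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and check reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF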
00:30:57.020 [2024-11-19 03:12:07.577591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.278 [2024-11-19 03:12:07.658732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:57.278 [2024-11-19 03:12:07.709080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.278 [2024-11-19 03:12:07.709136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.279 [2024-11-19 03:12:07.709160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.279 [2024-11-19 03:12:07.709171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.279 [2024-11-19 03:12:07.709180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.279 [2024-11-19 03:12:07.710721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.279 [2024-11-19 03:12:07.710791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.279 [2024-11-19 03:12:07.710840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.279 [2024-11-19 03:12:07.710843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.279 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.279 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:30:57.279 03:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:57.537 [2024-11-19 03:12:08.091718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.537 03:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:57.537 03:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:57.537 03:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.537 03:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:58.104 Malloc1 00:30:58.104 03:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.363 03:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:58.621 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.880 [2024-11-19 03:12:09.279040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.880 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:59.138 03:12:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:59.396 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:59.396 fio-3.35 00:30:59.396 Starting 1 thread 00:31:01.926 00:31:01.926 test: (groupid=0, jobs=1): 
err= 0: pid=355132: Tue Nov 19 03:12:12 2024 00:31:01.926 read: IOPS=7594, BW=29.7MiB/s (31.1MB/s)(59.5MiB/2007msec) 00:31:01.926 slat (nsec): min=1886, max=126549, avg=2511.24, stdev=1800.04 00:31:01.926 clat (usec): min=3120, max=15197, avg=9258.49, stdev=758.38 00:31:01.926 lat (usec): min=3141, max=15213, avg=9261.01, stdev=758.31 00:31:01.926 clat percentiles (usec): 00:31:01.926 | 1.00th=[ 7635], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:31:01.926 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:31:01.926 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:31:01.926 | 99.00th=[11076], 99.50th=[11207], 99.90th=[13698], 99.95th=[15008], 00:31:01.926 | 99.99th=[15139] 00:31:01.926 bw ( KiB/s): min=28680, max=31688, per=99.88%, avg=30342.00, stdev=1272.15, samples=4 00:31:01.926 iops : min= 7170, max= 7922, avg=7585.50, stdev=318.04, samples=4 00:31:01.926 write: IOPS=7583, BW=29.6MiB/s (31.1MB/s)(59.5MiB/2007msec); 0 zone resets 00:31:01.926 slat (usec): min=2, max=125, avg= 2.63, stdev= 1.57 00:31:01.926 clat (usec): min=1326, max=13604, avg=7489.25, stdev=666.85 00:31:01.926 lat (usec): min=1333, max=13606, avg=7491.88, stdev=666.86 00:31:01.926 clat percentiles (usec): 00:31:01.926 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 6718], 20.00th=[ 6980], 00:31:01.926 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635], 00:31:01.926 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:31:01.926 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11863], 99.95th=[13304], 00:31:01.926 | 99.99th=[13566] 00:31:01.926 bw ( KiB/s): min=29672, max=31168, per=99.98%, avg=30330.00, stdev=762.29, samples=4 00:31:01.926 iops : min= 7418, max= 7792, avg=7582.50, stdev=190.57, samples=4 00:31:01.926 lat (msec) : 2=0.02%, 4=0.11%, 10=92.44%, 20=7.44% 00:31:01.926 cpu : usr=60.07%, sys=38.53%, ctx=88, majf=0, minf=41 00:31:01.926 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:01.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.926 issued rwts: total=15242,15221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.926 00:31:01.926 Run status group 0 (all jobs): 00:31:01.926 READ: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=59.5MiB (62.4MB), run=2007-2007msec 00:31:01.926 WRITE: bw=29.6MiB/s (31.1MB/s), 29.6MiB/s-29.6MiB/s (31.1MB/s-31.1MB/s), io=59.5MiB (62.3MB), run=2007-2007msec 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:01.926 03:12:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:01.926 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:01.926 fio-3.35 00:31:01.926 Starting 1 thread 00:31:04.454 00:31:04.454 test: (groupid=0, jobs=1): err= 0: pid=355461: Tue Nov 19 03:12:14 2024 00:31:04.454 read: IOPS=7604, BW=119MiB/s (125MB/s)(239MiB/2010msec) 00:31:04.454 slat (usec): min=2, max=121, avg= 3.70, stdev= 1.85 00:31:04.454 clat (usec): min=1925, max=20941, avg=9505.81, stdev=2166.93 00:31:04.454 lat (usec): min=1928, max=20944, avg=9509.51, stdev=2166.97 00:31:04.454 clat percentiles (usec): 00:31:04.454 | 1.00th=[ 4883], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7767], 00:31:04.454 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[ 9896], 00:31:04.454 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12387], 95.00th=[13173], 00:31:04.454 | 99.00th=[15795], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957], 00:31:04.454 | 99.99th=[19268] 00:31:04.454 bw ( KiB/s): min=58208, max=72576, per=51.70%, avg=62904.00, stdev=6578.34, samples=4 00:31:04.454 iops : min= 3638, max= 4536, avg=3931.50, stdev=411.15, samples=4 00:31:04.454 write: IOPS=4471, BW=69.9MiB/s (73.3MB/s)(129MiB/1840msec); 0 zone resets 00:31:04.454 slat 
(usec): min=30, max=194, avg=33.80, stdev= 5.85 00:31:04.454 clat (usec): min=5102, max=22412, avg=12915.32, stdev=2451.91 00:31:04.454 lat (usec): min=5133, max=22443, avg=12949.12, stdev=2451.61 00:31:04.454 clat percentiles (usec): 00:31:04.454 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10683], 00:31:04.454 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12911], 60.00th=[13698], 00:31:04.454 | 70.00th=[14353], 80.00th=[15008], 90.00th=[16057], 95.00th=[16909], 00:31:04.454 | 99.00th=[18744], 99.50th=[19268], 99.90th=[21627], 99.95th=[22152], 00:31:04.454 | 99.99th=[22414] 00:31:04.454 bw ( KiB/s): min=59360, max=75776, per=91.56%, avg=65512.00, stdev=7135.23, samples=4 00:31:04.454 iops : min= 3710, max= 4736, avg=4094.50, stdev=445.95, samples=4 00:31:04.454 lat (msec) : 2=0.01%, 4=0.17%, 10=45.10%, 20=54.60%, 50=0.12% 00:31:04.454 cpu : usr=75.86%, sys=22.95%, ctx=55, majf=0, minf=61 00:31:04.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:04.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.454 issued rwts: total=15285,8228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.454 00:31:04.454 Run status group 0 (all jobs): 00:31:04.454 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=239MiB (250MB), run=2010-2010msec 00:31:04.454 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=129MiB (135MB), run=1840-1840msec 00:31:04.454 03:12:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:04.712 03:12:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:07.988 Nvme0n1 00:31:07.988 03:12:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=ae966791-3444-4d50-8d6f-5a6aa54210ec 
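For the lvol-backed portion of the fio host test, the trace above first attaches the machine's local PCIe drive as an SPDK bdev and then builds a logical volume store on it. A condensed sketch of those two steps, using the device address, bdev name, and IP exactly as printed in the trace:

# attach the local NVMe device at 0000:88:00.0 as bdev Nvme0n1
$SPDK/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2
# create a logical volume store with 1 GiB clusters on top of it and capture its UUID
ls_guid=$($SPDK/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0)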
00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb ae966791-3444-4d50-8d6f-5a6aa54210ec 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=ae966791-3444-4d50-8d6f-5a6aa54210ec 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:11.265 { 00:31:11.265 "uuid": "ae966791-3444-4d50-8d6f-5a6aa54210ec", 00:31:11.265 "name": "lvs_0", 00:31:11.265 "base_bdev": "Nvme0n1", 00:31:11.265 "total_data_clusters": 930, 00:31:11.265 "free_clusters": 930, 00:31:11.265 "block_size": 512, 00:31:11.265 "cluster_size": 1073741824 00:31:11.265 } 00:31:11.265 ]' 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ae966791-3444-4d50-8d6f-5a6aa54210ec") .free_clusters' 00:31:11.265 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:11.266 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ae966791-3444-4d50-8d6f-5a6aa54210ec") .cluster_size' 00:31:11.266 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:11.266 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:11.266 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:11.266 952320 00:31:11.266 03:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:11.523 ca6d1472-b7c1-4d74-bf04-4b5332a366b8 00:31:11.523 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:11.781 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:12.039 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:12.296 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:12.553 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:12.553 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:12.553 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:12.553 03:12:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:12.553 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:12.553 fio-3.35 00:31:12.553 Starting 1 thread 00:31:15.080 00:31:15.080 test: (groupid=0, jobs=1): err= 0: pid=356752: Tue Nov 19 03:12:25 2024 00:31:15.080 read: IOPS=5985, BW=23.4MiB/s (24.5MB/s)(46.9MiB/2007msec) 00:31:15.080 slat (nsec): min=1796, max=172101, avg=2316.65, stdev=2208.33 00:31:15.080 clat (usec): min=1232, max=171406, avg=11656.83, stdev=11670.19 00:31:15.080 lat (usec): min=1235, max=171443, avg=11659.15, stdev=11670.53 00:31:15.080 clat percentiles (msec): 00:31:15.080 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:15.080 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:15.080 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:31:15.080 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:15.080 | 99.99th=[ 171] 00:31:15.080 bw ( KiB/s): min=16832, 
max=26424, per=99.66%, avg=23860.00, stdev=4688.20, samples=4 00:31:15.080 iops : min= 4208, max= 6606, avg=5965.00, stdev=1172.05, samples=4 00:31:15.080 write: IOPS=5968, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2007msec); 0 zone resets 00:31:15.080 slat (nsec): min=1936, max=135765, avg=2473.68, stdev=1603.80 00:31:15.080 clat (usec): min=249, max=169494, avg=9612.50, stdev=10940.58 00:31:15.080 lat (usec): min=252, max=169500, avg=9614.98, stdev=10940.89 00:31:15.080 clat percentiles (msec): 00:31:15.080 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:15.080 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:15.080 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:15.080 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:31:15.080 | 99.99th=[ 169] 00:31:15.080 bw ( KiB/s): min=17832, max=25984, per=99.97%, avg=23866.00, stdev=4023.46, samples=4 00:31:15.080 iops : min= 4458, max= 6496, avg=5966.50, stdev=1005.86, samples=4 00:31:15.080 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:15.080 lat (msec) : 2=0.03%, 4=0.10%, 10=57.07%, 20=42.25%, 250=0.53% 00:31:15.080 cpu : usr=64.36%, sys=34.40%, ctx=103, majf=0, minf=41 00:31:15.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:15.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:15.080 issued rwts: total=12013,11978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:15.080 00:31:15.080 Run status group 0 (all jobs): 00:31:15.080 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2007-2007msec 00:31:15.080 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.1MB), run=2007-2007msec 00:31:15.080 03:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:15.338 03:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:16.711 03:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=1ee0acff-3af2-47ad-891d-304ce391eec8 00:31:16.711 03:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 1ee0acff-3af2-47ad-891d-304ce391eec8 00:31:16.711 03:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=1ee0acff-3af2-47ad-891d-304ce391eec8 00:31:16.711 03:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:16.711 03:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:16.711 03:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:16.711 03:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:16.711 { 00:31:16.711 "uuid": "ae966791-3444-4d50-8d6f-5a6aa54210ec", 00:31:16.711 "name": "lvs_0", 00:31:16.711 "base_bdev": "Nvme0n1", 00:31:16.711 "total_data_clusters": 930, 00:31:16.711 "free_clusters": 0, 
00:31:16.711 "block_size": 512, 00:31:16.711 "cluster_size": 1073741824 00:31:16.711 }, 00:31:16.711 { 00:31:16.711 "uuid": "1ee0acff-3af2-47ad-891d-304ce391eec8", 00:31:16.711 "name": "lvs_n_0", 00:31:16.711 "base_bdev": "ca6d1472-b7c1-4d74-bf04-4b5332a366b8", 00:31:16.711 "total_data_clusters": 237847, 00:31:16.711 "free_clusters": 237847, 00:31:16.711 "block_size": 512, 00:31:16.711 "cluster_size": 4194304 00:31:16.711 } 00:31:16.711 ]' 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1ee0acff-3af2-47ad-891d-304ce391eec8") .free_clusters' 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1ee0acff-3af2-47ad-891d-304ce391eec8") .cluster_size' 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:16.711 951388 00:31:16.711 03:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:17.640 f8c7688e-2715-4f36-8cd2-d349df8bb8e8 00:31:17.640 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:17.897 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:18.153 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:18.410 03:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:18.668 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:18.668 fio-3.35 00:31:18.668 Starting 1 thread 00:31:21.195 00:31:21.195 test: (groupid=0, jobs=1): err= 0: pid=357594: Tue Nov 19 03:12:31 2024 00:31:21.195 read: IOPS=5677, BW=22.2MiB/s (23.3MB/s)(44.6MiB/2009msec) 00:31:21.195 slat (nsec): min=1981, max=189197, avg=2650.30, stdev=2603.27 00:31:21.195 clat (usec): min=4675, max=20714, avg=12272.20, stdev=1168.94 00:31:21.195 lat (usec): min=4688, max=20716, avg=12274.85, stdev=1168.82 00:31:21.195 clat percentiles (usec): 00:31:21.195 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[10814], 20.00th=[11338], 00:31:21.195 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:31:21.195 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:31:21.195 | 99.00th=[15008], 99.50th=[15139], 99.90th=[19268], 99.95th=[19530], 00:31:21.195 | 99.99th=[20579] 00:31:21.195 bw ( KiB/s): min=21584, max=23224, per=99.89%, avg=22686.00, stdev=748.63, samples=4 00:31:21.195 iops : min= 5396, max= 5806, avg=5671.50, stdev=187.16, samples=4 00:31:21.195 write: IOPS=5647, BW=22.1MiB/s (23.1MB/s)(44.3MiB/2009msec); 0 zone resets 00:31:21.195 slat (usec): min=2, max=142, avg= 2.76, stdev= 1.81 00:31:21.195 clat (usec): min=2278, max=19492, avg=10117.69, stdev=970.49 00:31:21.195 lat (usec): min=2285, max=19495, avg=10120.45, stdev=970.47 00:31:21.195 clat percentiles (usec): 00:31:21.195 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:31:21.195 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:31:21.195 | 70.00th=[10552], 80.00th=[10814], 
90.00th=[11207], 95.00th=[11469], 00:31:21.195 | 99.00th=[12256], 99.50th=[12780], 99.90th=[17957], 99.95th=[18220], 00:31:21.195 | 99.99th=[19530] 00:31:21.195 bw ( KiB/s): min=22312, max=22976, per=99.89%, avg=22566.00, stdev=286.32, samples=4 00:31:21.195 iops : min= 5578, max= 5744, avg=5641.50, stdev=71.58, samples=4 00:31:21.195 lat (msec) : 4=0.05%, 10=23.14%, 20=76.81%, 50=0.01% 00:31:21.195 cpu : usr=62.60%, sys=36.01%, ctx=154, majf=0, minf=41 00:31:21.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:21.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:21.195 issued rwts: total=11407,11346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:21.195 00:31:21.195 Run status group 0 (all jobs): 00:31:21.195 READ: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.6MiB (46.7MB), run=2009-2009msec 00:31:21.195 WRITE: bw=22.1MiB/s (23.1MB/s), 22.1MiB/s-22.1MiB/s (23.1MB/s-23.1MB/s), io=44.3MiB (46.5MB), run=2009-2009msec 00:31:21.195 03:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:21.195 03:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:21.195 03:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:25.374 03:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:25.374 03:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:28.651 03:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:28.651 03:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.550 rmmod nvme_tcp 00:31:30.550 rmmod nvme_fabrics 00:31:30.550 rmmod nvme_keyring 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:30.550 03:12:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 354379 ']' 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 354379 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 354379 ']' 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 354379 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354379 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354379' 00:31:30.550 killing process with pid 354379 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 354379 00:31:30.550 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 354379 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.809 03:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.347 00:31:33.347 real 0m38.353s 00:31:33.347 user 2m26.962s 00:31:33.347 sys 0m7.378s 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.347 ************************************ 00:31:33.347 END TEST nvmf_fio_host 00:31:33.347 ************************************ 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:31:33.347 03:12:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.347 ************************************ 00:31:33.347 START TEST nvmf_failover 00:31:33.347 ************************************ 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:33.347 * Looking for test storage... 00:31:33.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:33.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.347 --rc genhtml_branch_coverage=1 00:31:33.347 --rc genhtml_function_coverage=1 00:31:33.347 --rc genhtml_legend=1 00:31:33.347 --rc geninfo_all_blocks=1 00:31:33.347 --rc geninfo_unexecuted_blocks=1 00:31:33.347 00:31:33.347 ' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:33.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.347 --rc genhtml_branch_coverage=1 00:31:33.347 --rc genhtml_function_coverage=1 00:31:33.347 --rc genhtml_legend=1 00:31:33.347 --rc geninfo_all_blocks=1 00:31:33.347 --rc geninfo_unexecuted_blocks=1 00:31:33.347 00:31:33.347 ' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:33.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.347 --rc genhtml_branch_coverage=1 00:31:33.347 --rc genhtml_function_coverage=1 00:31:33.347 --rc genhtml_legend=1 00:31:33.347 --rc geninfo_all_blocks=1 00:31:33.347 --rc geninfo_unexecuted_blocks=1 00:31:33.347 00:31:33.347 ' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:33.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.347 --rc genhtml_branch_coverage=1 00:31:33.347 --rc genhtml_function_coverage=1 00:31:33.347 --rc genhtml_legend=1 00:31:33.347 --rc geninfo_all_blocks=1 00:31:33.347 --rc geninfo_unexecuted_blocks=1 00:31:33.347 00:31:33.347 ' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.347 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:33.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.348 03:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.251 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:35.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:35.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:35.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:35.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.252 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:31:35.253 00:31:35.253 --- 10.0.0.2 ping statistics --- 00:31:35.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.253 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:31:35.253 00:31:35.253 --- 10.0.0.1 ping statistics --- 00:31:35.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.253 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.253 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=360867 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 360867 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 360867 ']' 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.512 03:12:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:35.512 [2024-11-19 03:12:45.947250] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:31:35.512 [2024-11-19 03:12:45.947327] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.512 [2024-11-19 03:12:46.022132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:35.512 [2024-11-19 03:12:46.067916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:35.512 [2024-11-19 03:12:46.067970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.512 [2024-11-19 03:12:46.067995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.512 [2024-11-19 03:12:46.068006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.512 [2024-11-19 03:12:46.068015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.512 [2024-11-19 03:12:46.069462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.512 [2024-11-19 03:12:46.072707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.512 [2024-11-19 03:12:46.072719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.771 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:35.771 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:35.771 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:35.771 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.771 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:35.771 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.771 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:36.029 [2024-11-19 03:12:46.454283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.029 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:36.288 Malloc0 00:31:36.288 03:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:36.547 03:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.805 03:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.063 [2024-11-19 03:12:47.562778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.063 03:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:37.322 [2024-11-19 03:12:47.827610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:37.322 03:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:37.581 [2024-11-19 03:12:48.092486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=361140 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 361140 /var/tmp/bdevperf.sock 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 361140 ']' 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:37.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:37.581 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:37.839 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.839 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:37.839 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:38.406 NVMe0n1 00:31:38.406 03:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:38.664 00:31:38.664 03:12:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=361272 00:31:38.664 03:12:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:38.664 03:12:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:40.035 03:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.035 03:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:43.314 03:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:43.572 00:31:43.572 03:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:43.830 [2024-11-19 03:12:54.337930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 [2024-11-19 03:12:54.338118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc05a0 is same with the state(6) to be set 00:31:43.830 03:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:47.110 03:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.110 [2024-11-19 03:12:57.652430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.110 03:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:48.483 03:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:48.483 [2024-11-19 03:12:58.943921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.943991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 
[2024-11-19 03:12:58.944084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the 
state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.483 [2024-11-19 03:12:58.944531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 [2024-11-19 03:12:58.944783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc14d0 is same with the state(6) to be set 00:31:48.484 03:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 361272 00:31:55.052 { 00:31:55.052 "results": [ 00:31:55.052 { 00:31:55.052 "job": "NVMe0n1", 00:31:55.052 "core_mask": "0x1", 00:31:55.052 "workload": "verify", 00:31:55.052 "status": "finished", 00:31:55.052 "verify_range": { 00:31:55.052 "start": 0, 00:31:55.052 "length": 16384 00:31:55.052 }, 00:31:55.052 "queue_depth": 128, 00:31:55.052 "io_size": 4096, 00:31:55.052 "runtime": 15.013856, 00:31:55.052 "iops": 8385.986917684571, 00:31:55.052 "mibps": 32.75776139720536, 00:31:55.052 "io_failed": 8780, 00:31:55.052 "io_timeout": 0, 00:31:55.052 "avg_latency_us": 14240.730601206316, 00:31:55.052 "min_latency_us": 533.997037037037, 00:31:55.052 "max_latency_us": 27962.02666666667 00:31:55.052 } 00:31:55.052 ], 00:31:55.052 "core_count": 1 00:31:55.052 } 00:31:55.052 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 361140 00:31:55.052 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 361140 ']' 00:31:55.052 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 361140 00:31:55.052 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:31:55.052 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.052 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361140 00:31:55.052 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:55.053 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:55.053 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361140' 00:31:55.053 killing process with pid 361140 00:31:55.053 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 361140 
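Aside: the run driven above reduces to a short RPC sequence — start bdevperf against its own RPC socket, attach the same subsystem through two portals with -x failover so the second trid becomes a standby path, start the verify workload, then remove and re-add listeners underneath it until the 15 s run finishes and the JSON result block above is printed. The following is a minimal sketch of that sequence using only the commands, paths, and flags visible in this log; it assumes the nvmf target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420-4422, the sleeps are crude stand-ins for the waitforlisten/sleep helpers, and it is not a substitute for test/nvmf/host/failover.sh (which also rotates ports 4421/4422 and collects try.txt).

#!/usr/bin/env bash
# Condensed sketch of the failover sequence exercised above (paths and flags
# copied from this log).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# bdevperf in "wait for RPC" mode (-z) on its own socket: 128-deep 4K verify for 15s.
"$SPDK"/build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
sleep 1   # stand-in for waitforlisten on $SOCK

# Attach the same subsystem through two portals; -x failover registers the
# second trid as a standby path rather than an active multipath member.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover

# Start I/O, then remove the active listener so the initiator must fail over;
# re-adding it later lets the test bounce between portals while I/O keeps running.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &
perf_pid=$!
sleep 1
"$SPDK"/scripts/rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 3
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

wait "$perf_pid"      # perform_tests returns when the 15s run ends; results print as JSON
kill "$bdevperf_pid"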
00:31:55.053 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 361140 00:31:55.053 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:55.053 [2024-11-19 03:12:48.160634] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:31:55.053 [2024-11-19 03:12:48.160757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361140 ] 00:31:55.053 [2024-11-19 03:12:48.234315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.053 [2024-11-19 03:12:48.282172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.053 Running I/O for 15 seconds... 00:31:55.053 8484.00 IOPS, 33.14 MiB/s [2024-11-19T02:13:05.668Z] [2024-11-19 03:12:50.498223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.053 [2024-11-19 03:12:50.498286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.498968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.498996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499137] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.053 [2024-11-19 03:12:50.499314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.053 [2024-11-19 03:12:50.499328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499414] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.499558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.499585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.499613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.499640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.499701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.499734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.499762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.499966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.499981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 
[2024-11-19 03:12:50.500062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.054 [2024-11-19 03:12:50.500462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.054 [2024-11-19 03:12:50.500489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.054 [2024-11-19 03:12:50.500505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.500783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.500817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.500848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.500877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.500906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.500940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.500956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.500970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78928 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.501280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 
[2024-11-19 03:12:50.501312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.501349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.501378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.501407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.501435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.055 [2024-11-19 03:12:50.501464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.055 [2024-11-19 03:12:50.501744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.055 [2024-11-19 03:12:50.501760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.501775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.056 [2024-11-19 03:12:50.501789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.501804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.056 [2024-11-19 03:12:50.501824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.501839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.056 [2024-11-19 03:12:50.501858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.501874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.056 [2024-11-19 03:12:50.501888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.501918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.501936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.501950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.501969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.501990] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:31:55.056 [2024-11-19 03:12:50.502319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.056 [2024-11-19 03:12:50.502446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.056 [2024-11-19 03:12:50.502457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:31:55.056 [2024-11-19 03:12:50.502469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502542] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:55.056 [2024-11-19 03:12:50.502595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:50.502615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:50.502645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:50.502680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:50.502715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:50.502730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:31:55.056 [2024-11-19 03:12:50.502791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fb3b0 (9): Bad file descriptor 00:31:55.056 [2024-11-19 03:12:50.506062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:55.056 [2024-11-19 03:12:50.573803] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:31:55.056 8179.00 IOPS, 31.95 MiB/s [2024-11-19T02:13:05.671Z] 8333.00 IOPS, 32.55 MiB/s [2024-11-19T02:13:05.671Z] 8376.75 IOPS, 32.72 MiB/s [2024-11-19T02:13:05.671Z] [2024-11-19 03:12:54.338174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:54.338218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:54.338251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:54.338280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.056 [2024-11-19 03:12:54.338307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fb3b0 is same with the state(6) to be set 00:31:55.056 [2024-11-19 03:12:54.338390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.056 [2024-11-19 03:12:54.338412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.056 [2024-11-19 03:12:54.338451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.056 [2024-11-19 03:12:54.338502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.056 [2024-11-19 03:12:54.338532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.056 [2024-11-19 03:12:54.338588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.056 [2024-11-19 03:12:54.338602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.338985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.338999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95872 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:55.057 [2024-11-19 03:12:54.339503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.057 [2024-11-19 03:12:54.339602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.057 [2024-11-19 03:12:54.339615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.339979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.339992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.058 [2024-11-19 03:12:54.340738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.058 [2024-11-19 03:12:54.340769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.058 [2024-11-19 03:12:54.340784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.340798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.340813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.340828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.340843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.340857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.340875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.340890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.340905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.340919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.340934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.340947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.340962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.340976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 
[2024-11-19 03:12:54.340991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96608 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.341975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.341991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.059 [2024-11-19 03:12:54.342019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.059 [2024-11-19 03:12:54.342035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 
03:12:54.342257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:54.342314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.060 [2024-11-19 03:12:54.342366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.060 [2024-11-19 03:12:54.342378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96296 len:8 PRP1 0x0 PRP2 0x0 00:31:55.060 [2024-11-19 03:12:54.342391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:54.342455] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:55.060 [2024-11-19 03:12:54.342474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:31:55.060 [2024-11-19 03:12:54.345763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:55.060 [2024-11-19 03:12:54.345804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fb3b0 (9): Bad file descriptor 00:31:55.060 8363.60 IOPS, 32.67 MiB/s [2024-11-19T02:13:05.675Z] [2024-11-19 03:12:54.462083] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:31:55.060 8246.33 IOPS, 32.21 MiB/s [2024-11-19T02:13:05.675Z] 8286.14 IOPS, 32.37 MiB/s [2024-11-19T02:13:05.675Z] 8320.50 IOPS, 32.50 MiB/s [2024-11-19T02:13:05.675Z] 8355.44 IOPS, 32.64 MiB/s [2024-11-19T02:13:05.675Z] [2024-11-19 03:12:58.946786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.060 [2024-11-19 03:12:58.946832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.946861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.946879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.946895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.946910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.946925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.946940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.946956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.946971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.946987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:55.060 [2024-11-19 03:12:58.947135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.060 [2024-11-19 03:12:58.947545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.060 [2024-11-19 03:12:58.947563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.060 [2024-11-19 03:12:58.947578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947750] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.947980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.947994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 
[2024-11-19 03:12:58.948374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.061 [2024-11-19 03:12:58.948631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.061 [2024-11-19 03:12:58.948645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.948970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.948985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:89 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47272 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 
03:12:58.949619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.062 [2024-11-19 03:12:58.949885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.062 [2024-11-19 03:12:58.949917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.062 [2024-11-19 03:12:58.949934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47424 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.949952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.949974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 
03:12:58.949986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47432 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47440 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47448 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47464 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47472 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950296] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47480 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47488 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47496 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47504 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47512 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47520 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47528 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47536 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47544 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47552 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47560 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47568 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 
03:12:58.950931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47576 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.950957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.950967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.950979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47584 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.950992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.951021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.951031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.951042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47592 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.951054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.951067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.951077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.951087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47600 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.951104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.063 [2024-11-19 03:12:58.951118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.063 [2024-11-19 03:12:58.951129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.063 [2024-11-19 03:12:58.951139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47608 len:8 PRP1 0x0 PRP2 0x0 00:31:55.063 [2024-11-19 03:12:58.951152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.951167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.064 [2024-11-19 03:12:58.951178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.064 [2024-11-19 03:12:58.951190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47616 len:8 PRP1 0x0 PRP2 0x0 00:31:55.064 [2024-11-19 03:12:58.951202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.951215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.064 [2024-11-19 03:12:58.951225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.064 [2024-11-19 03:12:58.951236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47624 len:8 PRP1 0x0 PRP2 0x0 00:31:55.064 [2024-11-19 03:12:58.951248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.951261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.064 [2024-11-19 03:12:58.951272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.064 [2024-11-19 03:12:58.966345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47632 len:8 PRP1 0x0 PRP2 0x0 00:31:55.064 [2024-11-19 03:12:58.966375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.966392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.064 [2024-11-19 03:12:58.966405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.064 [2024-11-19 03:12:58.966415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47640 len:8 PRP1 0x0 PRP2 0x0 00:31:55.064 [2024-11-19 03:12:58.966428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.966441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:55.064 [2024-11-19 03:12:58.966452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:55.064 [2024-11-19 03:12:58.966462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47648 len:8 PRP1 0x0 PRP2 0x0 00:31:55.064 [2024-11-19 03:12:58.966474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.966545] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:55.064 [2024-11-19 03:12:58.966604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.064 [2024-11-19 03:12:58.966623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.966639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.064 [2024-11-19 03:12:58.966669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.966685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.064 [2024-11-19 03:12:58.966713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.966730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.064 [2024-11-19 03:12:58.966745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.064 [2024-11-19 03:12:58.966766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:31:55.064 [2024-11-19 03:12:58.966831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fb3b0 (9): Bad file descriptor 00:31:55.064 [2024-11-19 03:12:58.970091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:55.064 [2024-11-19 03:12:58.993558] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:31:55.064 8332.70 IOPS, 32.55 MiB/s [2024-11-19T02:13:05.679Z] 8359.09 IOPS, 32.65 MiB/s [2024-11-19T02:13:05.679Z] 8356.50 IOPS, 32.64 MiB/s [2024-11-19T02:13:05.679Z] 8367.85 IOPS, 32.69 MiB/s [2024-11-19T02:13:05.679Z] 8378.50 IOPS, 32.73 MiB/s [2024-11-19T02:13:05.679Z] 8385.80 IOPS, 32.76 MiB/s 00:31:55.064 Latency(us) 00:31:55.064 [2024-11-19T02:13:05.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.064 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:55.064 Verification LBA range: start 0x0 length 0x4000 00:31:55.064 NVMe0n1 : 15.01 8385.99 32.76 584.79 0.00 14240.73 534.00 27962.03 00:31:55.064 [2024-11-19T02:13:05.679Z] =================================================================================================================== 00:31:55.064 [2024-11-19T02:13:05.679Z] Total : 8385.99 32.76 584.79 0.00 14240.73 534.00 27962.03 00:31:55.064 Received shutdown signal, test time was about 15.000000 seconds 00:31:55.064 00:31:55.064 Latency(us) 00:31:55.064 [2024-11-19T02:13:05.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.064 [2024-11-19T02:13:05.679Z] =================================================================================================================== 00:31:55.064 [2024-11-19T02:13:05.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=363108 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 363108 /var/tmp/bdevperf.sock 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 363108 ']' 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:55.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:55.064 03:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:55.064 [2024-11-19 03:13:05.154504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:55.064 03:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:55.064 [2024-11-19 03:13:05.419287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:55.064 03:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:55.377 NVMe0n1 00:31:55.377 03:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:55.980 00:31:55.980 03:13:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:56.590 00:31:56.590 03:13:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:56.590 03:13:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:56.590 03:13:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:56.886 03:13:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:00.237 03:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:00.237 03:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:00.237 03:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=363796 00:32:00.237 03:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:00.237 03:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 363796 00:32:01.621 { 00:32:01.621 "results": [ 00:32:01.621 { 00:32:01.621 "job": "NVMe0n1", 00:32:01.621 "core_mask": "0x1", 00:32:01.621 
"workload": "verify", 00:32:01.621 "status": "finished", 00:32:01.621 "verify_range": { 00:32:01.621 "start": 0, 00:32:01.621 "length": 16384 00:32:01.621 }, 00:32:01.621 "queue_depth": 128, 00:32:01.621 "io_size": 4096, 00:32:01.621 "runtime": 1.010487, 00:32:01.621 "iops": 8491.944973067442, 00:32:01.621 "mibps": 33.1716600510447, 00:32:01.621 "io_failed": 0, 00:32:01.621 "io_timeout": 0, 00:32:01.621 "avg_latency_us": 14987.675337848044, 00:32:01.621 "min_latency_us": 2682.1214814814816, 00:32:01.621 "max_latency_us": 16117.001481481482 00:32:01.621 } 00:32:01.621 ], 00:32:01.621 "core_count": 1 00:32:01.621 } 00:32:01.621 03:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:01.621 [2024-11-19 03:13:04.677248] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:32:01.621 [2024-11-19 03:13:04.677354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363108 ] 00:32:01.621 [2024-11-19 03:13:04.745765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.621 [2024-11-19 03:13:04.789273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.621 [2024-11-19 03:13:07.402973] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:01.621 [2024-11-19 03:13:07.403115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.621 [2024-11-19 03:13:07.403139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.621 [2024-11-19 03:13:07.403173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.621 [2024-11-19 03:13:07.403187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.621 [2024-11-19 03:13:07.403203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.621 [2024-11-19 03:13:07.403216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.621 [2024-11-19 03:13:07.403231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.621 [2024-11-19 03:13:07.403256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.621 [2024-11-19 03:13:07.403278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:01.621 [2024-11-19 03:13:07.403335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:01.621 [2024-11-19 03:13:07.403373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4e3b0 (9): Bad file descriptor 00:32:01.621 [2024-11-19 03:13:07.409315] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:01.621 Running I/O for 1 seconds... 00:32:01.621 8441.00 IOPS, 32.97 MiB/s 00:32:01.621 Latency(us) 00:32:01.621 [2024-11-19T02:13:12.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.621 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:01.621 Verification LBA range: start 0x0 length 0x4000 00:32:01.621 NVMe0n1 : 1.01 8491.94 33.17 0.00 0.00 14987.68 2682.12 16117.00 00:32:01.621 [2024-11-19T02:13:12.236Z] =================================================================================================================== 00:32:01.621 [2024-11-19T02:13:12.236Z] Total : 8491.94 33.17 0.00 0.00 14987.68 2682.12 16117.00 00:32:01.621 03:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:01.621 03:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:01.621 03:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.880 03:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:01.880 03:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:02.139 03:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:02.397 03:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:05.684 03:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:05.684 03:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 363108 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 363108 ']' 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 363108 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363108 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363108' 00:32:05.684 killing process with pid 363108 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 363108 00:32:05.684 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 363108 00:32:05.943 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:05.944 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:06.202 rmmod nvme_tcp 00:32:06.202 rmmod nvme_fabrics 00:32:06.202 rmmod nvme_keyring 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 360867 ']' 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 360867 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 360867 ']' 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 360867 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 360867 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360867' 00:32:06.202 killing process with pid 360867 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 360867 00:32:06.202 03:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 360867 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.461 03:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.003 00:32:09.003 real 0m35.643s 00:32:09.003 user 2m5.899s 00:32:09.003 sys 0m5.962s 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:09.003 ************************************ 00:32:09.003 END TEST nvmf_failover 00:32:09.003 ************************************ 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.003 ************************************ 00:32:09.003 START TEST nvmf_host_discovery 00:32:09.003 ************************************ 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:09.003 * Looking for test storage... 
00:32:09.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:09.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.003 --rc genhtml_branch_coverage=1 00:32:09.003 --rc genhtml_function_coverage=1 00:32:09.003 --rc genhtml_legend=1 00:32:09.003 --rc geninfo_all_blocks=1 00:32:09.003 --rc geninfo_unexecuted_blocks=1 00:32:09.003 00:32:09.003 ' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:09.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.003 --rc genhtml_branch_coverage=1 00:32:09.003 --rc genhtml_function_coverage=1 00:32:09.003 --rc genhtml_legend=1 00:32:09.003 --rc geninfo_all_blocks=1 00:32:09.003 --rc geninfo_unexecuted_blocks=1 00:32:09.003 00:32:09.003 ' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:09.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.003 --rc genhtml_branch_coverage=1 00:32:09.003 --rc genhtml_function_coverage=1 00:32:09.003 --rc genhtml_legend=1 00:32:09.003 --rc geninfo_all_blocks=1 00:32:09.003 --rc geninfo_unexecuted_blocks=1 00:32:09.003 00:32:09.003 ' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:09.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.003 --rc genhtml_branch_coverage=1 00:32:09.003 --rc genhtml_function_coverage=1 00:32:09.003 --rc genhtml_legend=1 00:32:09.003 --rc geninfo_all_blocks=1 00:32:09.003 --rc geninfo_unexecuted_blocks=1 00:32:09.003 00:32:09.003 ' 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:09.003 03:13:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:09.003 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:09.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:09.004 03:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.909 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:10.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:10.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.910 03:13:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:10.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:10.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.910 
03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:32:10.910 00:32:10.910 --- 10.0.0.2 ping statistics --- 00:32:10.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.910 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:10.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:32:10.910 00:32:10.910 --- 10.0.0.1 ping statistics --- 00:32:10.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.910 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=366523 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 366523 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 366523 ']' 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.910 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.169 [2024-11-19 03:13:21.573611] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
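Before the discovery test proper, nvmf_tcp_init wires the two ice ports into a back-to-back NVMe/TCP setup: the target-side interface (cvl_0_0) moves into a private network namespace and the initiator side (cvl_0_1) stays in the default one, with a single iptables rule opening port 4420. A condensed sketch of that setup and of the target launch, with every device name, address and flag taken from the log above (reference only, not captured output):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # target interface lives in its own namespace; the initiator keeps the default one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in on the initiator side, then sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # load the kernel initiator module and start the target inside the namespace
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &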
00:32:11.170 [2024-11-19 03:13:21.573707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.170 [2024-11-19 03:13:21.645630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.170 [2024-11-19 03:13:21.687987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.170 [2024-11-19 03:13:21.688054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.170 [2024-11-19 03:13:21.688067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.170 [2024-11-19 03:13:21.688079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.170 [2024-11-19 03:13:21.688089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.170 [2024-11-19 03:13:21.688617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 [2024-11-19 03:13:21.881026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 [2024-11-19 03:13:21.889242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 null0 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 null1 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=366547 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 366547 /tmp/host.sock 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 366547 ']' 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:11.428 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.428 03:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 [2024-11-19 03:13:21.960624] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
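With both SPDK applications running, the target side is provisioned over its default RPC socket (a TCP transport, a discovery listener on 8009, and two null bdevs to export later), while a second app started with -r /tmp/host.sock will act as the discovery host. A minimal sketch of those calls, using only commands and arguments visible in the log (reference only):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  # target side (default RPC socket): transport, discovery listener, backing bdevs
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine
  # host side: a second SPDK app whose RPC socket drives bdev_nvme discovery
  $SPDK/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &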
00:32:11.428 [2024-11-19 03:13:21.960724] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366547 ] 00:32:11.428 [2024-11-19 03:13:22.024631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.687 [2024-11-19 03:13:22.071562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.687 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 [2024-11-19 03:13:22.506817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:11.946 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.947 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:12.205 03:13:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:12.774 [2024-11-19 03:13:23.268367] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:12.774 [2024-11-19 03:13:23.268390] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:12.774 [2024-11-19 03:13:23.268418] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:12.774 [2024-11-19 03:13:23.354684] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:13.034 [2024-11-19 03:13:23.417390] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:13.034 [2024-11-19 03:13:23.418364] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb811b0:1 started. 00:32:13.034 [2024-11-19 03:13:23.420100] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:13.034 [2024-11-19 03:13:23.420121] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:13.034 [2024-11-19 03:13:23.426763] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb811b0 was disconnected and freed. delete nvme_qpair. 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.295 03:13:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:13.295 03:13:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:13.295 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:13.296 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.296 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.296 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.296 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.296 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.296 03:13:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.556 [2024-11-19 03:13:24.015624] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb6b320:1 started. 
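The assertions in this run all go through the same small set of shell helpers that the xtrace markers point at: waitforcondition polls a condition roughly once per second for up to ten attempts (autotest_common.sh@918-924), and the conditions it evaluates are thin jq pipelines over JSON-RPC calls against the host socket (host/discovery.sh@55-75). A minimal reconstruction from the trace above; the loop's failure branch and the notify_id bookkeeping are inferred from the 0 -> 1 -> 2 -> 4 progression rather than shown verbatim:

    waitforcondition() {
        local cond=$1        # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition met
            sleep 1                    # retry once per second, as the trace shows
        done
        return 1                       # assumed failure path after ~10 tries
    }

    get_subsystem_names() {  # host/discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {        # host/discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {  # host/discovery.sh@63: trsvcids of every path of one controller
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_notification_count() {  # host/discovery.sh@74-75: events newer than the last consumed id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

With these in place, a check like the one at discovery.sh@113 above reads simply as waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'.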
00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.556 [2024-11-19 03:13:24.059281] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb6b320 was disconnected and freed. delete nvme_qpair. 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:13.556 03:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.495 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.754 [2024-11-19 03:13:25.138913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:14.754 [2024-11-19 03:13:25.140094] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:14.754 [2024-11-19 03:13:25.140137] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.754 03:13:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.754 [2024-11-19 03:13:25.226311] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:14.754 03:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:14.754 [2024-11-19 03:13:25.286266] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:14.754 [2024-11-19 03:13:25.286338] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:14.754 [2024-11-19 03:13:25.286355] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:14.754 [2024-11-19 03:13:25.286363] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:15.692 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:15.951 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.952 [2024-11-19 03:13:26.371424] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:15.952 [2024-11-19 03:13:26.371469] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:15.952 [2024-11-19 03:13:26.373225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.952 [2024-11-19 03:13:26.373258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.952 [2024-11-19 03:13:26.373288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.952 [2024-11-19 03:13:26.373303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.952 [2024-11-19 03:13:26.373327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.952 [2024-11-19 03:13:26.373356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.952 [2024-11-19 03:13:26.373370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.952 [2024-11-19 03:13:26.373393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.952 [2024-11-19 03:13:26.373407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:15.952 [2024-11-19 03:13:26.383214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.952 [2024-11-19 03:13:26.393257] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:15.952 [2024-11-19 03:13:26.393279] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:15.952 [2024-11-19 03:13:26.393289] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:15.952 [2024-11-19 03:13:26.393297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:15.952 [2024-11-19 03:13:26.393326] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:15.952 [2024-11-19 03:13:26.393468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.952 [2024-11-19 03:13:26.393497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb531f0 with addr=10.0.0.2, port=4420 00:32:15.952 [2024-11-19 03:13:26.393514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.952 [2024-11-19 03:13:26.393536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.952 [2024-11-19 03:13:26.393583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:15.952 [2024-11-19 03:13:26.393602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:15.952 [2024-11-19 03:13:26.393617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:15.952 [2024-11-19 03:13:26.393631] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:15.952 [2024-11-19 03:13:26.393642] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:15.952 [2024-11-19 03:13:26.393650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:15.952 [2024-11-19 03:13:26.403358] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:15.952 [2024-11-19 03:13:26.403377] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:15.952 [2024-11-19 03:13:26.403386] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:15.952 [2024-11-19 03:13:26.403393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:15.952 [2024-11-19 03:13:26.403415] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:15.952 [2024-11-19 03:13:26.403555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.952 [2024-11-19 03:13:26.403582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb531f0 with addr=10.0.0.2, port=4420 00:32:15.952 [2024-11-19 03:13:26.403598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.952 [2024-11-19 03:13:26.403620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.952 [2024-11-19 03:13:26.403654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:15.952 [2024-11-19 03:13:26.403687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:15.952 [2024-11-19 03:13:26.403712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:15.952 [2024-11-19 03:13:26.403725] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:15.952 [2024-11-19 03:13:26.403734] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:15.952 [2024-11-19 03:13:26.403741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:15.952 [2024-11-19 03:13:26.413448] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:15.952 [2024-11-19 03:13:26.413468] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:15.952 [2024-11-19 03:13:26.413476] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:15.952 [2024-11-19 03:13:26.413483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:15.952 [2024-11-19 03:13:26.413513] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:15.952 [2024-11-19 03:13:26.413642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.952 [2024-11-19 03:13:26.413669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb531f0 with addr=10.0.0.2, port=4420 00:32:15.952 [2024-11-19 03:13:26.413686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.952 [2024-11-19 03:13:26.413733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.952 [2024-11-19 03:13:26.413797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:15.952 [2024-11-19 03:13:26.413818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:15.952 [2024-11-19 03:13:26.413832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:15.952 [2024-11-19 03:13:26.413845] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:15.952 [2024-11-19 03:13:26.413854] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:15.952 [2024-11-19 03:13:26.413862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.952 [2024-11-19 03:13:26.423547] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:15.952 [2024-11-19 03:13:26.423569] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:15.952 [2024-11-19 03:13:26.423578] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:15.952 [2024-11-19 03:13:26.423585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:15.952 [2024-11-19 03:13:26.423609] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.952 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.952 [2024-11-19 03:13:26.423800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.952 [2024-11-19 03:13:26.423830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb531f0 with addr=10.0.0.2, port=4420 00:32:15.953 [2024-11-19 03:13:26.423849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.953 [2024-11-19 03:13:26.423873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.953 [2024-11-19 03:13:26.423927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:15.953 [2024-11-19 03:13:26.423953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:15.953 [2024-11-19 03:13:26.423968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:15.953 [2024-11-19 03:13:26.423981] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:15.953 [2024-11-19 03:13:26.423990] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:15.953 [2024-11-19 03:13:26.423998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:15.953 [2024-11-19 03:13:26.433653] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:15.953 [2024-11-19 03:13:26.433677] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:15.953 [2024-11-19 03:13:26.433687] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:15.953 [2024-11-19 03:13:26.433702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:15.953 [2024-11-19 03:13:26.433728] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:15.953 [2024-11-19 03:13:26.433959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.953 [2024-11-19 03:13:26.433988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb531f0 with addr=10.0.0.2, port=4420 00:32:15.953 [2024-11-19 03:13:26.434005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.953 [2024-11-19 03:13:26.434028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.953 [2024-11-19 03:13:26.434049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:15.953 [2024-11-19 03:13:26.434063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:15.953 [2024-11-19 03:13:26.434077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:15.953 [2024-11-19 03:13:26.434090] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:15.953 [2024-11-19 03:13:26.434099] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:15.953 [2024-11-19 03:13:26.434106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:15.953 [2024-11-19 03:13:26.443762] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:15.953 [2024-11-19 03:13:26.443783] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:15.953 [2024-11-19 03:13:26.443792] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:15.953 [2024-11-19 03:13:26.443799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:15.953 [2024-11-19 03:13:26.443823] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:15.953 [2024-11-19 03:13:26.443972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.953 [2024-11-19 03:13:26.444001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb531f0 with addr=10.0.0.2, port=4420 00:32:15.953 [2024-11-19 03:13:26.444024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.953 [2024-11-19 03:13:26.444047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.953 [2024-11-19 03:13:26.444080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:15.953 [2024-11-19 03:13:26.444098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:15.953 [2024-11-19 03:13:26.444113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:15.953 [2024-11-19 03:13:26.444125] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:15.953 [2024-11-19 03:13:26.444135] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:15.953 [2024-11-19 03:13:26.444142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.953 [2024-11-19 03:13:26.453856] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:15.953 [2024-11-19 03:13:26.453877] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:15.953 [2024-11-19 03:13:26.453886] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:15.953 [2024-11-19 03:13:26.453894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:15.953 [2024-11-19 03:13:26.453917] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:15.953 [2024-11-19 03:13:26.454078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.953 [2024-11-19 03:13:26.454119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb531f0 with addr=10.0.0.2, port=4420 00:32:15.953 [2024-11-19 03:13:26.454136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb531f0 is same with the state(6) to be set 00:32:15.953 [2024-11-19 03:13:26.454157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb531f0 (9): Bad file descriptor 00:32:15.953 [2024-11-19 03:13:26.454178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:15.953 [2024-11-19 03:13:26.454191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:15.953 [2024-11-19 03:13:26.454205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:15.953 [2024-11-19 03:13:26.454216] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:15.953 [2024-11-19 03:13:26.454225] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:15.953 [2024-11-19 03:13:26.454232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
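Each failed reconnect above is the same cycle repeated while the discovery poller catches up; once the next log page omits 4420 ("not found" just below), the stale path is torn down and only 4421 remains. The check the test runs next (discovery.sh@131 below) is the usual polling pattern; with $NVMF_SECOND_PORT expanded as in the trace it amounts to:

    # wait until the controller reports only the surviving 4421 path
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'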
00:32:15.953 [2024-11-19 03:13:26.458737] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:15.953 [2024-11-19 03:13:26.458765] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # get_notification_count 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:15.953 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:15.954 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.954 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:16.212 03:13:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:16.212 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.213 03:13:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.149 [2024-11-19 03:13:27.740337] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:17.149 [2024-11-19 03:13:27.740360] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:17.149 [2024-11-19 03:13:27.740378] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:17.408 [2024-11-19 03:13:27.867812] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:17.408 [2024-11-19 03:13:27.932454] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:17.408 [2024-11-19 03:13:27.933241] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xb7be30:1 started. 
00:32:17.408 [2024-11-19 03:13:27.935430] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:17.408 [2024-11-19 03:13:27.935468] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:17.408 [2024-11-19 03:13:27.938776] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xb7be30 was disconnected and freed. delete nvme_qpair. 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.408 request: 00:32:17.408 { 00:32:17.408 "name": "nvme", 00:32:17.408 "trtype": "tcp", 00:32:17.408 "traddr": "10.0.0.2", 00:32:17.408 "adrfam": "ipv4", 00:32:17.408 "trsvcid": "8009", 00:32:17.408 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:17.408 "wait_for_attach": true, 00:32:17.408 "method": "bdev_nvme_start_discovery", 00:32:17.408 "req_id": 1 00:32:17.408 } 00:32:17.408 Got JSON-RPC error response 00:32:17.408 response: 00:32:17.408 { 00:32:17.408 "code": -17, 00:32:17.408 "message": "File exists" 00:32:17.408 } 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.408 03:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:17.408 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.668 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.668 request: 00:32:17.668 { 00:32:17.668 "name": "nvme_second", 00:32:17.668 "trtype": "tcp", 00:32:17.668 "traddr": "10.0.0.2", 00:32:17.668 "adrfam": "ipv4", 00:32:17.668 "trsvcid": "8009", 00:32:17.668 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:17.668 "wait_for_attach": true, 00:32:17.669 "method": 
"bdev_nvme_start_discovery", 00:32:17.669 "req_id": 1 00:32:17.669 } 00:32:17.669 Got JSON-RPC error response 00:32:17.669 response: 00:32:17.669 { 00:32:17.669 "code": -17, 00:32:17.669 "message": "File exists" 00:32:17.669 } 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:17.669 03:13:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.669 03:13:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.610 [2024-11-19 03:13:29.147309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.610 [2024-11-19 03:13:29.147399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7ded0 with addr=10.0.0.2, port=8010 00:32:18.610 [2024-11-19 03:13:29.147434] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:18.610 [2024-11-19 03:13:29.147450] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:18.610 [2024-11-19 03:13:29.147471] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:19.549 [2024-11-19 03:13:30.149733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.549 [2024-11-19 03:13:30.149815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7ded0 with addr=10.0.0.2, port=8010 00:32:19.549 [2024-11-19 03:13:30.149849] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:19.549 [2024-11-19 03:13:30.149864] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:19.549 [2024-11-19 03:13:30.149878] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:20.926 [2024-11-19 03:13:31.151937] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:20.926 request: 00:32:20.926 { 00:32:20.926 "name": "nvme_second", 00:32:20.926 "trtype": "tcp", 00:32:20.926 "traddr": "10.0.0.2", 00:32:20.926 "adrfam": "ipv4", 00:32:20.926 "trsvcid": "8010", 00:32:20.926 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:20.926 "wait_for_attach": false, 00:32:20.926 "attach_timeout_ms": 3000, 00:32:20.926 "method": "bdev_nvme_start_discovery", 00:32:20.926 "req_id": 1 00:32:20.926 } 00:32:20.926 Got JSON-RPC error response 00:32:20.926 response: 00:32:20.926 { 00:32:20.926 "code": -110, 00:32:20.926 "message": "Connection timed out" 00:32:20.926 } 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:20.926 03:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:20.926 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 366547 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.927 rmmod nvme_tcp 00:32:20.927 rmmod nvme_fabrics 00:32:20.927 rmmod nvme_keyring 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 366523 ']' 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 366523 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 366523 ']' 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 366523 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366523 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366523' 00:32:20.927 killing process with pid 366523 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 366523 
00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 366523 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.927 03:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:23.466 00:32:23.466 real 0m14.405s 00:32:23.466 user 0m21.324s 00:32:23.466 sys 0m2.931s 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.466 ************************************ 00:32:23.466 END TEST nvmf_host_discovery 00:32:23.466 ************************************ 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.466 ************************************ 00:32:23.466 START TEST nvmf_host_multipath_status 00:32:23.466 ************************************ 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:23.466 * Looking for test storage... 
00:32:23.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:23.466 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:23.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.467 --rc genhtml_branch_coverage=1 00:32:23.467 --rc genhtml_function_coverage=1 00:32:23.467 --rc genhtml_legend=1 00:32:23.467 --rc geninfo_all_blocks=1 00:32:23.467 --rc geninfo_unexecuted_blocks=1 00:32:23.467 00:32:23.467 ' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:23.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.467 --rc genhtml_branch_coverage=1 00:32:23.467 --rc genhtml_function_coverage=1 00:32:23.467 --rc genhtml_legend=1 00:32:23.467 --rc geninfo_all_blocks=1 00:32:23.467 --rc geninfo_unexecuted_blocks=1 00:32:23.467 00:32:23.467 ' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:23.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.467 --rc genhtml_branch_coverage=1 00:32:23.467 --rc genhtml_function_coverage=1 00:32:23.467 --rc genhtml_legend=1 00:32:23.467 --rc geninfo_all_blocks=1 00:32:23.467 --rc geninfo_unexecuted_blocks=1 00:32:23.467 00:32:23.467 ' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:23.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.467 --rc genhtml_branch_coverage=1 00:32:23.467 --rc genhtml_function_coverage=1 00:32:23.467 --rc genhtml_legend=1 00:32:23.467 --rc geninfo_all_blocks=1 00:32:23.467 --rc geninfo_unexecuted_blocks=1 00:32:23.467 00:32:23.467 ' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:23.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:23.467 03:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:25.375 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:25.376 03:13:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:25.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:25.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:25.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:32:25.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.376 03:13:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:25.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:32:25.376 00:32:25.376 --- 10.0.0.2 ping statistics --- 00:32:25.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.376 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:32:25.376 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:25.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:32:25.376 00:32:25.376 --- 10.0.0.1 ping statistics --- 00:32:25.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.377 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=369829 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 369829 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 369829 ']' 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.377 03:13:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.377 03:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:25.637 [2024-11-19 03:13:36.037798] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:32:25.637 [2024-11-19 03:13:36.037875] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.637 [2024-11-19 03:13:36.108499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:25.637 [2024-11-19 03:13:36.153077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.637 [2024-11-19 03:13:36.153133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.637 [2024-11-19 03:13:36.153146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.637 [2024-11-19 03:13:36.153158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.637 [2024-11-19 03:13:36.153172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.637 [2024-11-19 03:13:36.154549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.637 [2024-11-19 03:13:36.154554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=369829 00:32:25.896 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:26.154 [2024-11-19 03:13:36.576723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.154 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:26.413 Malloc0 00:32:26.413 03:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:32:26.671 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:26.930 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.188 [2024-11-19 03:13:37.690607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.189 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:27.447 [2024-11-19 03:13:37.951316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=369999 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 369999 /var/tmp/bdevperf.sock 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 369999 ']' 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.447 03:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.706 03:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.706 03:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:27.706 03:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:27.965 03:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:28.534 Nvme0n1 00:32:28.534 03:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:29.100 Nvme0n1 00:32:29.100 03:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:29.100 03:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:31.635 03:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:31.635 03:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:31.635 03:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:31.635 03:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.015 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:33.273 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:33.274 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:33.274 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.274 03:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:33.532 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.532 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:33.532 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.532 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:33.791 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.791 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:33.791 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.791 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:34.050 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.050 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:34.050 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.050 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:34.308 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.308 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:34.308 03:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
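The set_ANA_state step seen at multipath_status.sh@59/@60 simply flips the ANA state of each listener. The script body itself is not reproduced in this log, so the following is only a sketch consistent with the pair of rpc.py calls it emits (the function signature is assumed; the first argument drives port 4420, the second port 4421):

  # Sketch, assuming the two-listener layout above.
  set_ANA_state() {
      local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Example from the trace: make 4420 non_optimized while 4421 stays optimized.
  set_ANA_state non_optimized optimized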
00:32:34.567 03:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:34.826 03:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.205 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:36.463 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.464 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:36.464 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.464 03:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:36.722 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.722 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:36.722 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.722 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:36.980 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.980 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:36.980 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
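Each port_status check queries the initiator over its private RPC socket and filters the io_paths entry for one port; comparing the result against the expected value is where the repeated [[ true == \t\r\u\e ]] / [[ false == \f\a\l\s\e ]] lines come from. A sketch of that check, reconstructed from the logged rpc.py and jq invocations (the function body is assumed):

  # Sketch: $1 = trsvcid (4420/4421), $2 = field (current/connected/accessible), $3 = expected.
  port_status() {
      local port=$1 field=$2 expected=$3
      local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
      local status
      status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$status" == "$expected" ]]
  }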
00:32:36.980 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:37.239 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.239 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:37.239 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.239 03:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:37.497 03:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.497 03:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:37.497 03:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:37.756 03:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:38.015 03:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.396 03:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:39.655 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:39.655 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:39.655 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.655 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:39.912 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.912 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:39.912 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.912 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:40.170 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.171 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:40.171 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.171 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:40.429 03:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.429 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:40.429 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.429 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:40.687 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.687 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:40.687 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:40.945 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:41.511 03:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:42.447 03:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:42.447 03:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:42.447 03:13:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.447 03:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:42.708 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.708 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:42.708 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.708 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:42.966 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:42.966 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:42.966 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.966 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:43.224 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.224 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:43.224 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.224 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:43.483 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.483 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:43.483 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.483 03:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:43.741 03:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.741 03:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:43.741 03:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.741 03:13:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:44.000 03:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.000 03:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:44.000 03:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:44.258 03:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:44.517 03:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:45.457 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:45.457 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:45.457 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.457 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.024 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:46.282 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.282 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:46.282 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.282 03:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:46.540 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.540 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:46.541 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.541 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:47.107 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.107 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:47.107 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.107 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:47.107 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.107 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:47.107 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:47.365 03:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:47.624 03:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:48.998 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:48.998 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:48.998 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.998 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:48.998 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:48.998 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:48.998 03:13:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.998 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:49.257 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.257 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:49.257 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.257 03:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:49.515 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.515 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:49.515 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.515 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:49.774 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.774 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:49.774 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.774 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:50.343 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:50.343 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:50.343 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.343 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:50.343 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.343 03:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:50.603 03:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:32:50.603 03:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:51.173 03:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:51.173 03:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:52.554 03:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:52.554 03:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:52.554 03:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.554 03:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:52.554 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.554 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:52.554 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.554 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:52.813 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.813 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:52.813 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.813 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:53.071 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.071 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:53.071 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.071 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:53.330 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.330 03:14:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:53.330 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.330 03:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:53.589 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.589 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:53.589 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.589 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:53.847 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.847 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:53.847 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:54.105 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:54.363 03:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:55.740 03:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:55.740 03:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:55.740 03:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.740 03:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:55.740 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:55.741 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:55.741 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.741 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:55.997 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.997 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:55.997 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.997 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:56.254 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.254 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:56.254 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.254 03:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:56.512 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.512 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:56.512 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.512 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:57.078 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.078 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:57.078 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.078 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:57.078 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.078 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:57.078 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:57.644 03:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:57.644 03:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
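Every iteration in this trace has the same shape: set the ANA state of both listeners, give bdevperf a second to refresh its path view, then assert current/connected/accessible for each port. A sketch of one such cycle, using the hypothetical helpers sketched earlier; the argument order mirrors the @68 through @73 checks above (4420 current, 4421 current, 4420 connected, 4421 connected, 4420 accessible, 4421 accessible), and the non_optimized/non_optimized expectations match the cycle that follows:

  # Sketch of one verification cycle; the real script's error handling is not shown in this log.
  check_status() {
      port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

  set_ANA_state non_optimized non_optimized
  sleep 1
  check_status true true true true true true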
00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.025 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.283 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.283 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.283 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.283 03:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.541 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.541 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.541 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.541 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:59.800 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.800 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:59.800 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.800 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.058 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.058 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:00.058 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.058 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.317 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.317 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:00.317 03:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:00.576 03:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:01.145 03:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:02.081 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:02.081 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:02.081 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.081 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.339 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.339 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:02.339 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.339 03:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.598 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.598 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.598 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.598 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.856 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:02.856 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.856 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.856 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:03.114 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.114 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:03.114 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.114 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:03.374 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.374 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:03.374 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.374 03:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 369999 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 369999 ']' 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 369999 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369999 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369999' 00:33:03.632 killing process with pid 369999 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 369999 00:33:03.632 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 369999 00:33:03.632 { 00:33:03.632 "results": [ 00:33:03.632 { 00:33:03.632 "job": "Nvme0n1", 00:33:03.632 
"core_mask": "0x4", 00:33:03.632 "workload": "verify", 00:33:03.632 "status": "terminated", 00:33:03.632 "verify_range": { 00:33:03.632 "start": 0, 00:33:03.632 "length": 16384 00:33:03.632 }, 00:33:03.632 "queue_depth": 128, 00:33:03.632 "io_size": 4096, 00:33:03.632 "runtime": 34.344438, 00:33:03.632 "iops": 7978.4971295788855, 00:33:03.632 "mibps": 31.16600441241752, 00:33:03.632 "io_failed": 0, 00:33:03.632 "io_timeout": 0, 00:33:03.632 "avg_latency_us": 16013.099868542893, 00:33:03.632 "min_latency_us": 371.6740740740741, 00:33:03.632 "max_latency_us": 4101097.2444444443 00:33:03.632 } 00:33:03.632 ], 00:33:03.632 "core_count": 1 00:33:03.632 } 00:33:03.913 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 369999 00:33:03.913 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:03.913 [2024-11-19 03:13:38.012045] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:33:03.913 [2024-11-19 03:13:38.012139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369999 ] 00:33:03.913 [2024-11-19 03:13:38.079894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.913 [2024-11-19 03:13:38.127566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:03.913 Running I/O for 90 seconds... 00:33:03.913 8391.00 IOPS, 32.78 MiB/s [2024-11-19T02:14:14.528Z] 8458.00 IOPS, 33.04 MiB/s [2024-11-19T02:14:14.528Z] 8465.33 IOPS, 33.07 MiB/s [2024-11-19T02:14:14.528Z] 8478.00 IOPS, 33.12 MiB/s [2024-11-19T02:14:14.528Z] 8483.80 IOPS, 33.14 MiB/s [2024-11-19T02:14:14.528Z] 8490.17 IOPS, 33.16 MiB/s [2024-11-19T02:14:14.528Z] 8494.14 IOPS, 33.18 MiB/s [2024-11-19T02:14:14.528Z] 8503.62 IOPS, 33.22 MiB/s [2024-11-19T02:14:14.528Z] 8499.33 IOPS, 33.20 MiB/s [2024-11-19T02:14:14.528Z] 8504.00 IOPS, 33.22 MiB/s [2024-11-19T02:14:14.528Z] 8500.09 IOPS, 33.20 MiB/s [2024-11-19T02:14:14.528Z] 8510.75 IOPS, 33.25 MiB/s [2024-11-19T02:14:14.528Z] 8517.62 IOPS, 33.27 MiB/s [2024-11-19T02:14:14.528Z] 8509.43 IOPS, 33.24 MiB/s [2024-11-19T02:14:14.528Z] [2024-11-19 03:13:54.761586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.761706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.761760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.761801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:7 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.761842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.761882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.761923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.761964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.761980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.762004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.762029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.762073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.762092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.762115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.762131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.762154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.762170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.762207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.913 [2024-11-19 03:13:54.762224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.913 [2024-11-19 03:13:54.762260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:03.913 [2024-11-19 03:13:54.762278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[log continues with several hundred further *NOTICE* pairs from nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion between 2024-11-19 03:13:54.762 and 03:13:54.772: WRITE commands (nsid:1, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and occasional READ commands (nsid:1, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) covering lba 102696-103712, each completing on qid:1 with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0]
00:33:03.919 [2024-11-19 03:13:54.772485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:03.919 [2024-11-19 03:13:54.772501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.772522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.772538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.772560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.772592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.772615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.772631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.772654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.772670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.773378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.773424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:70 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.773816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.773832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.786353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.786393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.786430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.786467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.786511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.919 [2024-11-19 03:13:54.786584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 
sqhd:0063 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.919 [2024-11-19 03:13:54.786962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.919 [2024-11-19 03:13:54.786993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.787960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.787983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102928 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.920 [2024-11-19 03:13:54.788481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.920 [2024-11-19 03:13:54.788502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.788518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.788539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.788554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.788575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.788590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.788610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.788625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.788646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.788661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.788706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.788750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.788776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.788794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.789715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.789741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.789771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.789789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.789813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.789829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.789852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.789868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.789897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.789915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.789937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.789954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.789977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.789994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 
sqhd:001e p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.790951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.790990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.791006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.791029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 03:13:54.791060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.921 [2024-11-19 03:13:54.791082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.921 [2024-11-19 
03:13:54.791097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.922 [2024-11-19 03:13:54.791177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103392 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.922 [2024-11-19 03:13:54.791855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.922 [2024-11-19 03:13:54.791871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:03.922 (repeated nvme_qpair.c NOTICE pairs, 2024-11-19 03:13:54.791-54.800: WRITE sqid:1 lba:102832-103712 len:8 SGL DATA BLOCK OFFSET and READ sqid:1 lba:102696-102824 len:8 SGL TRANSPORT DATA BLOCK, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0)
00:33:03.926 8501.67 IOPS, 33.21 MiB/s [2024-11-19T02:14:14.541Z]
00:33:03.926 (repeated nvme_qpair.c NOTICE pairs, 2024-11-19 03:13:54.800-54.803: further WRITE/READ commands on sqid:1 over the same lba ranges, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0)
00:33:03.927 [2024-11-19 03:13:54.803655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103000 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:33:03.927 [2024-11-19 03:13:54.803685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.927 [2024-11-19 03:13:54.803717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.927 [2024-11-19 03:13:54.803739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.927 [2024-11-19 03:13:54.803763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.927 [2024-11-19 03:13:54.803779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.927 [2024-11-19 03:13:54.803801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.927 [2024-11-19 03:13:54.803817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.927 [2024-11-19 03:13:54.803839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.927 [2024-11-19 03:13:54.803856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.927 [2024-11-19 03:13:54.803878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.927 [2024-11-19 03:13:54.803894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.927 [2024-11-19 03:13:54.803917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.927 [2024-11-19 03:13:54.803934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.804714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.804750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.804777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.804795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.804819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.804836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.804859] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.804876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.804898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.804915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.804937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.804954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.804992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 
03:13:54.805288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.805983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.805999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.806051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.806093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.806131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.806168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.806204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.928 [2024-11-19 03:13:54.806240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.806276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.928 [2024-11-19 03:13:54.806312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.928 [2024-11-19 03:13:54.806333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:16 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.806974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.806990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.807682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.807738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.807778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.807818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.807857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.807896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.807936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807959] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.807975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.807997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.929 [2024-11-19 03:13:54.808475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.929 [2024-11-19 03:13:54.808498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.929 [2024-11-19 03:13:54.808514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808726] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.808963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.808986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 
03:13:54.809149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102840 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809950] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.809965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.809992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.810008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.810046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.810061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.810082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.930 [2024-11-19 03:13:54.810097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.930 [2024-11-19 03:13:54.810117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 
03:13:54.810339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.810525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.810541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.811968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.811990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.812006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.812028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.812044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.812068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.812084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.812106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.812122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.812145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.812176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.812199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.812214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.812251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.931 [2024-11-19 03:13:54.812266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:03.931 [2024-11-19 03:13:54.812287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.812790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.812811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.932 [2024-11-19 03:13:54.812827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819730] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.819958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.819981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.820966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.820983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.821005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.821022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.821044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.821060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.932 [2024-11-19 03:13:54.821083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.932 [2024-11-19 03:13:54.821100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821183] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 
[2024-11-19 03:13:54.821568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.933 [2024-11-19 03:13:54.821752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.821790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.821827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.821864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.821903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.821957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.821980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.821997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.933 [2024-11-19 03:13:54.822658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.933 [2024-11-19 03:13:54.822680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.822709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.822745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.822761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.822784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.822801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:33:03.934 [2024-11-19 03:13:54.822839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.822855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.822877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.822892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.822914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.822929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.822951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.822967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.823781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.823798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.934 [2024-11-19 03:13:54.824591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.934 [2024-11-19 03:13:54.824636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.824701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.824761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.824805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.824847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.824888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.824930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.824972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.824988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825243] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 
sqhd:0030 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.935 [2024-11-19 03:13:54.825837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.825962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.825988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.826020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.826046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.826061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.826087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.826102] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.826127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.935 [2024-11-19 03:13:54.826143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.935 [2024-11-19 03:13:54.826168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 
03:13:54.826517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:13:54.826738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:13:54.826760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.936 7970.31 IOPS, 31.13 MiB/s [2024-11-19T02:14:14.551Z] 7501.47 IOPS, 29.30 MiB/s [2024-11-19T02:14:14.551Z] 7084.72 IOPS, 27.67 MiB/s [2024-11-19T02:14:14.551Z] 6711.84 IOPS, 26.22 MiB/s [2024-11-19T02:14:14.551Z] 6785.25 IOPS, 26.50 MiB/s [2024-11-19T02:14:14.551Z] 6873.29 IOPS, 26.85 MiB/s [2024-11-19T02:14:14.551Z] 6980.41 IOPS, 27.27 MiB/s [2024-11-19T02:14:14.551Z] 7169.35 IOPS, 28.01 MiB/s [2024-11-19T02:14:14.551Z] 7349.29 IOPS, 28.71 MiB/s [2024-11-19T02:14:14.551Z] 7497.48 IOPS, 29.29 MiB/s [2024-11-19T02:14:14.551Z] 7531.46 IOPS, 29.42 MiB/s [2024-11-19T02:14:14.551Z] 7560.78 IOPS, 29.53 MiB/s [2024-11-19T02:14:14.551Z] 7592.89 IOPS, 29.66 MiB/s [2024-11-19T02:14:14.551Z] 7667.93 IOPS, 29.95 MiB/s [2024-11-19T02:14:14.551Z] 7777.73 IOPS, 30.38 MiB/s [2024-11-19T02:14:14.551Z] 7879.48 IOPS, 30.78 MiB/s [2024-11-19T02:14:14.551Z] [2024-11-19 03:14:11.430195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 
03:14:11.430479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.430972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.430996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.431028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.431052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.936 [2024-11-19 03:14:11.431068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.431091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.936 [2024-11-19 03:14:11.431108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.431130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.936 [2024-11-19 03:14:11.431147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.431169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.936 [2024-11-19 03:14:11.431186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.431209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.431225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.431248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.431264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.936 [2024-11-19 03:14:11.431286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.936 [2024-11-19 03:14:11.431306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.431362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.431636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.431908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.431925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.432665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.937 [2024-11-19 03:14:11.432755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.432794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.432839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.432878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.432916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.432955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.432991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:39 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.937 [2024-11-19 03:14:11.433374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.937 [2024-11-19 03:14:11.433396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.433411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.434893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:33:03.938 [2024-11-19 03:14:11.434915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.434931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.434968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.434990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.435005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.435040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.435056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.435078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.435093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.435114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.435128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.435149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.435164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.435185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.435199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.435220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.435235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437547] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.437656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.938 [2024-11-19 03:14:11.437720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.437770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.437808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.437845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.938 [2024-11-19 03:14:11.437882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.938 [2024-11-19 03:14:11.437905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.437920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.437942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:03.939 [2024-11-19 03:14:11.437958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.437995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.438010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.438045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.438089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.438125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.438160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.438197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.438237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.438274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.438310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.438346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.438382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.438404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.438419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.440990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:33:03.939 [2024-11-19 03:14:11.441726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.939 [2024-11-19 03:14:11.441797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.441953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.441990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.442012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.939 [2024-11-19 03:14:11.442033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.939 [2024-11-19 03:14:11.442048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.442084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.442120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.442157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.442193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.442229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.442272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.442317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.442354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.442410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.442433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.442449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.443256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.443370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.443408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.443461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.443619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.443657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.443719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:03.940 [2024-11-19 03:14:11.443787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.443962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.443977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.444000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.444027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.444383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.940 [2024-11-19 03:14:11.444407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.444434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.444453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.444476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.444498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.444522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 
nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.444539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.444562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.444578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.940 [2024-11-19 03:14:11.444600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.940 [2024-11-19 03:14:11.444617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.444671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.444739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.444780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.444818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.444858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.444896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.444935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.444975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.444998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.445033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.445089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.445128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.445181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.445219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.445257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.445294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.445333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.445370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:33:03.941 [2024-11-19 03:14:11.445392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.445408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.445461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.445498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.445534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.445560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.445577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.446871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.446948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.446970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.446995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.447025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.447043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.447065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.447082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.447105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.447121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.447143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.941 [2024-11-19 03:14:11.447159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.447182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.941 [2024-11-19 03:14:11.447213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.941 [2024-11-19 03:14:11.447236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.447252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.447273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.447289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.447311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.447326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.447348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.447363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.447384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.447399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.447421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.447436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.447457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.447472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.447494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.447514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:03.942 [2024-11-19 03:14:11.449376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.449558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.449651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.449714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.449761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.449800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.449961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.449977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.450031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.450070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.450107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.450146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.450199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.450237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.450274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.450311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.450349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.942 [2024-11-19 03:14:11.450387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.450414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.450431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.453930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.453956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.453998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.454017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.454040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.454057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.454080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.454096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.942 [2024-11-19 03:14:11.454118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.454134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:33:03.942 [2024-11-19 03:14:11.454156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.942 [2024-11-19 03:14:11.454173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.454877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.454962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.454985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:03.943 [2024-11-19 03:14:11.455376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.943 [2024-11-19 03:14:11.455417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.943 [2024-11-19 03:14:11.455742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.943 [2024-11-19 03:14:11.455765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.455781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.455803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.455819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.455841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.455857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.455895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.455913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.455940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.455958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.455981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.456013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.456037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.456052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.456074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.456090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.456112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.456145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.456169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.456186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.457925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.457950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.457977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.457996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.458036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.458091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.458128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.458166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.458210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.458250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.458289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.458342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:33:03.944 [2024-11-19 03:14:11.458365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.458381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.458419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.458456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.458495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.458532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.458554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.458570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.459578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.459623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.459670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.459738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.459778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.944 [2024-11-19 03:14:11.459819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.459859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.459898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.944 [2024-11-19 03:14:11.459921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.944 [2024-11-19 03:14:11.459937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.459960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.459977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.459999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:03.945 [2024-11-19 03:14:11.460551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.460737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.460838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.460854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.461534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.461975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.461991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.462013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.462029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.462050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.462066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.462089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.462105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.462127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.945 [2024-11-19 03:14:11.462143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.462165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.945 [2024-11-19 03:14:11.462181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.945 [2024-11-19 03:14:11.462203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.462219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.462241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.462258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.462281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.462298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:33:03.946 [2024-11-19 03:14:11.463423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.463573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.463612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.463650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.463831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.463874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.463915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.463954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.463976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.463992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.464080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.464119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.464197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.464276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:03.946 [2024-11-19 03:14:11.464641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.946 [2024-11-19 03:14:11.464745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.946 [2024-11-19 03:14:11.464768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.946 [2024-11-19 03:14:11.464784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.464808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.464825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.467430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.467497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.467537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.467575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.467613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.467667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.467717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.467763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.467802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.467847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.467887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.467926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.467964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.467992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.468009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.468087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.468127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.468284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.468323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.468362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:33:03.947 [2024-11-19 03:14:11.468477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.468615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.468632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.469165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.947 [2024-11-19 03:14:11.469211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.469251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.469306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.469355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.469394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.469432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.469470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.947 [2024-11-19 03:14:11.469491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.947 [2024-11-19 03:14:11.469512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.469551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.469589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.469627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.469679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.469733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.469773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.469812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.469850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.469889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.469928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.469950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.469967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.471393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.471433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:03.948 [2024-11-19 03:14:11.471473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.471526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.471704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.471755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.471794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.471916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.471955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.471993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.472011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.472065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.472102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.472139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.948 [2024-11-19 03:14:11.472176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.472212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.472249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.472286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.472328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.948 [2024-11-19 03:14:11.472365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:03.948 [2024-11-19 03:14:11.472386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.472401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.472423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.472438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.472460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.472475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.472497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.472512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.474884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.474909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.474937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.474956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.474995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:33:03.949 [2024-11-19 03:14:11.475089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.475855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.475955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.475986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.476020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.476036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.476057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.476089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.476112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.949 [2024-11-19 03:14:11.476129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.477283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.949 [2024-11-19 03:14:11.477308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:03.949 [2024-11-19 03:14:11.477335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:03.950 [2024-11-19 03:14:11.477482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.477678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.477730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.477769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.477809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.477849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.477893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.477932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.477971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.477994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.478010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.478049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.478089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.478128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.478173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.478228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.478266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.478303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.478341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.478363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.478383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.479255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.479279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.479306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.479324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.479347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.479363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.479386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.479402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.479442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.479459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.479482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.479498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.479520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.479537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:33:03.950 [2024-11-19 03:14:11.479560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.479577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.480752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.480776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.480805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.950 [2024-11-19 03:14:11.480824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.480847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.480864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.480887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.950 [2024-11-19 03:14:11.480933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.950 [2024-11-19 03:14:11.480950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.480972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.481879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.481956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.481999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:03.951 [2024-11-19 03:14:11.482017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.482039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.482069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.482091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.482107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.482128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.482143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.482164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.482179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.482205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.482221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.482242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.491757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.491802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.491821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.493514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.493597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.493636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.493675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.493746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.951 [2024-11-19 03:14:11.493803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.951 [2024-11-19 03:14:11.493841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:03.951 [2024-11-19 03:14:11.493878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.952 [2024-11-19 03:14:11.493894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:03.952 [2024-11-19 03:14:11.493918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.952 [2024-11-19 03:14:11.493934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:03.952 [2024-11-19 03:14:11.493956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.952 [2024-11-19 03:14:11.493972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:03.952 [2024-11-19 03:14:11.493994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.952 [2024-11-19 03:14:11.494011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:03.952 [2024-11-19 03:14:11.494033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.952 [2024-11-19 03:14:11.494050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:03.952 [2024-11-19 03:14:11.494072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.952 [2024-11-19 03:14:11.494088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:03.952 [2024-11-19 03:14:11.494114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.952 [2024-11-19 03:14:11.494131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:03.952 7939.81 IOPS, 31.01 MiB/s [2024-11-19T02:14:14.567Z] 7958.64 IOPS, 31.09 MiB/s [2024-11-19T02:14:14.567Z] 7975.88 IOPS, 31.16 MiB/s [2024-11-19T02:14:14.567Z] Received shutdown signal, test time was about 34.345231 seconds 00:33:03.952 00:33:03.952 Latency(us) 00:33:03.952 [2024-11-19T02:14:14.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.952 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:03.952 Verification LBA range: start 0x0 length 0x4000 00:33:03.952 Nvme0n1 : 34.34 7978.50 31.17 0.00 0.00 16013.10 371.67 4101097.24 00:33:03.952 [2024-11-19T02:14:14.567Z] =================================================================================================================== 00:33:03.952 [2024-11-19T02:14:14.567Z] Total : 7978.50 31.17 0.00 0.00 16013.10 371.67 4101097.24 00:33:03.952 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.216 rmmod nvme_tcp 00:33:04.216 rmmod nvme_fabrics 00:33:04.216 rmmod nvme_keyring 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:04.216 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 369829 ']' 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 369829 00:33:04.217 03:14:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 369829 ']' 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 369829 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369829 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369829' 00:33:04.217 killing process with pid 369829 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 369829 00:33:04.217 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 369829 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.476 03:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.383 03:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:06.383 00:33:06.383 real 0m43.429s 00:33:06.383 user 2m12.362s 00:33:06.383 sys 0m10.848s 00:33:06.383 03:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.383 03:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:06.383 ************************************ 00:33:06.383 END TEST nvmf_host_multipath_status 00:33:06.383 ************************************ 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:06.643 03:14:17 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.643 ************************************ 00:33:06.643 START TEST nvmf_discovery_remove_ifc 00:33:06.643 ************************************ 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:06.643 * Looking for test storage... 00:33:06.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.643 --rc genhtml_branch_coverage=1 00:33:06.643 --rc genhtml_function_coverage=1 00:33:06.643 --rc genhtml_legend=1 00:33:06.643 --rc geninfo_all_blocks=1 00:33:06.643 --rc geninfo_unexecuted_blocks=1 00:33:06.643 00:33:06.643 ' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.643 --rc genhtml_branch_coverage=1 00:33:06.643 --rc genhtml_function_coverage=1 00:33:06.643 --rc genhtml_legend=1 00:33:06.643 --rc geninfo_all_blocks=1 00:33:06.643 --rc geninfo_unexecuted_blocks=1 00:33:06.643 00:33:06.643 ' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.643 --rc genhtml_branch_coverage=1 00:33:06.643 --rc genhtml_function_coverage=1 00:33:06.643 --rc genhtml_legend=1 00:33:06.643 --rc geninfo_all_blocks=1 00:33:06.643 --rc geninfo_unexecuted_blocks=1 00:33:06.643 00:33:06.643 ' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.643 --rc genhtml_branch_coverage=1 00:33:06.643 --rc genhtml_function_coverage=1 00:33:06.643 --rc genhtml_legend=1 00:33:06.643 --rc geninfo_all_blocks=1 00:33:06.643 --rc geninfo_unexecuted_blocks=1 00:33:06.643 00:33:06.643 ' 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.643 
03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.643 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:06.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:06.644 03:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:09.178 03:14:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:09.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.178 03:14:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.178 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:09.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:09.179 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:09.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:09.179 
03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:09.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:33:09.179 00:33:09.179 --- 10.0.0.2 ping statistics --- 00:33:09.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.179 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:09.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:33:09.179 00:33:09.179 --- 10.0.0.1 ping statistics --- 00:33:09.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.179 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=376464 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 376464 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376464 ']' 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:09.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.179 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.179 [2024-11-19 03:14:19.634431] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:33:09.179 [2024-11-19 03:14:19.634518] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.179 [2024-11-19 03:14:19.708595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.179 [2024-11-19 03:14:19.754681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.179 [2024-11-19 03:14:19.754757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.179 [2024-11-19 03:14:19.754772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.179 [2024-11-19 03:14:19.754784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.179 [2024-11-19 03:14:19.754795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.179 [2024-11-19 03:14:19.755371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.438 [2024-11-19 03:14:19.901103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.438 [2024-11-19 03:14:19.909302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:09.438 null0 00:33:09.438 [2024-11-19 03:14:19.941217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.438 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=376499 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 376499 /tmp/host.sock 00:33:09.439 03:14:19 
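The rpc_cmd block at discovery_remove_ifc.sh@43 is not expanded in the trace; only its effects are visible (TCP transport init, a null0 bdev, and listeners on 10.0.0.2 ports 8009 and 4420 with host nqn.2021-12.io.spdk:test allowed). A hand-rolled approximation of that target-side setup with standard SPDK RPCs against the target's default /var/tmp/spdk.sock might look like the following sketch; it is not the script's literal RPC batch, and the null bdev size and block size are illustrative, not taken from the trace:

    # sketch of the target config implied by the log output above
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420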
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376499 ']' 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:09.439 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:09.439 03:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.439 [2024-11-19 03:14:20.013533] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:33:09.439 [2024-11-19 03:14:20.013626] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376499 ] 00:33:09.697 [2024-11-19 03:14:20.087610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.697 [2024-11-19 03:14:20.135869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.697 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.955 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.955 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:09.955 03:14:20 
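The second nvmf_tgt launched here is the host side of the test: it takes RPCs on /tmp/host.sock, enables bdev_nvme debug logging (-L bdev_nvme), and after framework_start_init it is pointed at the target's discovery service. bdev_nvme_start_discovery attaches whatever subsystems the discovery log page reports and keeps tracking the log page afterwards. Driving the same call by hand would look roughly like the following (the rpc.py path is assumed from the workspace layout; the flags are the ones visible in the trace):

    # attach via the discovery service and block until the data subsystem is connected
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The three timeout flags are what the rest of the test exercises: once the path is gone, reconnects are retried roughly every second and the controller is declared lost after about two seconds.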
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.955 03:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.891 [2024-11-19 03:14:21.388426] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:10.891 [2024-11-19 03:14:21.388468] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:10.891 [2024-11-19 03:14:21.388496] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:10.891 [2024-11-19 03:14:21.475778] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:11.149 [2024-11-19 03:14:21.658918] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:11.149 [2024-11-19 03:14:21.659978] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bcdc00:1 started. 00:33:11.149 [2024-11-19 03:14:21.661587] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:11.149 [2024-11-19 03:14:21.661650] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:11.149 [2024-11-19 03:14:21.661710] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:11.149 [2024-11-19 03:14:21.661735] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:11.149 [2024-11-19 03:14:21.661772] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:11.149 [2024-11-19 03:14:21.667384] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bcdc00 was disconnected and freed. delete nvme_qpair. 
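wait_for_bdev/get_bdev_list above is a plain poll over the host's RPC socket: list the bdevs, flatten the names, and retry once a second until the list matches the expectation (nvme0n1 here, an empty string later when the bdev must disappear, nvme1n1 at the end). A shell rendering of the loop as it appears in the trace, with an rpc.py invocation standing in for the script's rpc_cmd helper:

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # block until the expected bdev set is reported
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }
    wait_for_bdev nvme0n1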
00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.149 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:11.407 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.407 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:11.407 03:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:12.339 03:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:13.273 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:13.273 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.273 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:13.273 03:14:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.273 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.273 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:13.273 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:13.273 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.531 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:13.531 03:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:14.465 03:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:15.400 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.401 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:15.401 03:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.775 03:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.775 03:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.775 03:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:16.775 [2024-11-19 03:14:27.103218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:16.775 [2024-11-19 03:14:27.103300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.775 [2024-11-19 03:14:27.103325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.775 [2024-11-19 03:14:27.103354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.775 [2024-11-19 03:14:27.103368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.775 [2024-11-19 03:14:27.103383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.776 [2024-11-19 03:14:27.103396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.776 [2024-11-19 03:14:27.103409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.776 [2024-11-19 03:14:27.103422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.776 [2024-11-19 03:14:27.103435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.776 [2024-11-19 03:14:27.103449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.776 [2024-11-19 03:14:27.103462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baa400 is same with the state(6) to be set 00:33:16.776 [2024-11-19 03:14:27.113232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baa400 (9): Bad file descriptor 00:33:16.776 [2024-11-19 03:14:27.123279] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:16.776 [2024-11-19 03:14:27.123304] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
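With cvl_0_0 stripped of its address and downed, the established admin queue eventually dies with errno 110 (connection timed out), the queued admin commands are aborted with SQ DELETION status, and bdev_nvme begins its disconnect/reconnect cycle for nqn.2016-06.io.spdk:cnode0. Each reconnect attempt now fails in connect() with the same errno, so once --ctrlr-loss-timeout-sec expires the pending resets are failed, the controller is dropped, and the nvme0n1 bdev is deleted, which is what lets the empty-list wait above complete. To watch the same sequence from outside the script one could poll the host socket; bdev_nvme_get_controllers is a standard SPDK RPC, but treat the exact invocation below as an unverified sketch:

    # observe controller and bdev state while the interface is down (illustrative only)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'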
00:33:16.776 [2024-11-19 03:14:27.123314] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:16.776 [2024-11-19 03:14:27.123324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:16.776 [2024-11-19 03:14:27.123365] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:17.709 [2024-11-19 03:14:28.136724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:17.709 [2024-11-19 03:14:28.136781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baa400 with addr=10.0.0.2, port=4420 00:33:17.709 [2024-11-19 03:14:28.136805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baa400 is same with the state(6) to be set 00:33:17.709 [2024-11-19 03:14:28.136847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baa400 (9): Bad file descriptor 00:33:17.709 [2024-11-19 03:14:28.137312] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:17.709 [2024-11-19 03:14:28.137354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.709 [2024-11-19 03:14:28.137371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.709 [2024-11-19 03:14:28.137386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:17.709 [2024-11-19 03:14:28.137399] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:17.709 [2024-11-19 03:14:28.137410] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:17.709 [2024-11-19 03:14:28.137418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:17.709 [2024-11-19 03:14:28.137432] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:17.709 [2024-11-19 03:14:28.137441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.709 03:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:18.645 [2024-11-19 03:14:29.139931] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:18.645 [2024-11-19 03:14:29.139958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:18.645 [2024-11-19 03:14:29.139978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:18.645 [2024-11-19 03:14:29.139994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:18.645 [2024-11-19 03:14:29.140006] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:18.645 [2024-11-19 03:14:29.140036] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:18.645 [2024-11-19 03:14:29.140045] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:18.645 [2024-11-19 03:14:29.140052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:18.645 [2024-11-19 03:14:29.140111] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:18.646 [2024-11-19 03:14:29.140164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.646 [2024-11-19 03:14:29.140186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.646 [2024-11-19 03:14:29.140205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.646 [2024-11-19 03:14:29.140218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.646 [2024-11-19 03:14:29.140232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.646 [2024-11-19 03:14:29.140245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.646 [2024-11-19 03:14:29.140258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.646 [2024-11-19 03:14:29.140271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.646 [2024-11-19 03:14:29.140284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.646 [2024-11-19 03:14:29.140296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.646 [2024-11-19 03:14:29.140309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:18.646 [2024-11-19 03:14:29.140365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99b40 (9): Bad file descriptor 00:33:18.646 [2024-11-19 03:14:29.141370] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:18.646 [2024-11-19 03:14:29.141391] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:18.646 03:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:20.019 03:14:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:20.019 03:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:20.954 [2024-11-19 03:14:31.235850] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:20.954 [2024-11-19 03:14:31.235895] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:20.954 [2024-11-19 03:14:31.235919] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:20.954 03:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:20.954 [2024-11-19 03:14:31.362322] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:21.212 [2024-11-19 03:14:31.583807] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:21.212 [2024-11-19 03:14:31.584784] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bac720:1 started. 
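Restoring the path is the mirror image of the fault: the 10.0.0.2/24 address is re-added to cvl_0_0 inside the namespace and the link is brought back up, the still-running discovery poller reports nqn.2016-06.io.spdk:cnode0 again, a second controller instance is created, and its namespace surfaces as a new bdev (nvme1n1), which is what the final wait_for_bdev checks before teardown. A rough equivalent of that restore step, reusing this run's names and the polling helper sketched earlier:

    # bring the target port back and wait for the rediscovered namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1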
00:33:21.212 [2024-11-19 03:14:31.586171] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:21.212 [2024-11-19 03:14:31.586219] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:21.212 [2024-11-19 03:14:31.586252] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:21.212 [2024-11-19 03:14:31.586282] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:21.212 [2024-11-19 03:14:31.586297] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:21.212 [2024-11-19 03:14:31.593567] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bac720 was disconnected and freed. delete nvme_qpair. 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:21.776 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 376499 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376499 ']' 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376499 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376499 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376499' 00:33:22.035 killing process with pid 376499 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376499 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376499 00:33:22.035 03:14:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.035 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.035 rmmod nvme_tcp 00:33:22.035 rmmod nvme_fabrics 00:33:22.294 rmmod nvme_keyring 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 376464 ']' 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 376464 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376464 ']' 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376464 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376464 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376464' 00:33:22.294 killing process with pid 376464 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376464 00:33:22.294 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376464 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.555 03:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.468 03:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.468 00:33:24.468 real 0m17.912s 00:33:24.468 user 0m25.749s 00:33:24.468 sys 0m3.235s 00:33:24.468 03:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.468 03:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.468 ************************************ 00:33:24.468 END TEST nvmf_discovery_remove_ifc 00:33:24.468 ************************************ 00:33:24.468 03:14:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:24.468 03:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:24.468 03:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.468 03:14:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.468 ************************************ 00:33:24.468 START TEST nvmf_identify_kernel_target 00:33:24.468 ************************************ 00:33:24.468 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:24.468 * Looking for test storage... 
00:33:24.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:24.468 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:24.468 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:24.468 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.727 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:24.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.728 --rc genhtml_branch_coverage=1 00:33:24.728 --rc genhtml_function_coverage=1 00:33:24.728 --rc genhtml_legend=1 00:33:24.728 --rc geninfo_all_blocks=1 00:33:24.728 --rc geninfo_unexecuted_blocks=1 00:33:24.728 00:33:24.728 ' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:24.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.728 --rc genhtml_branch_coverage=1 00:33:24.728 --rc genhtml_function_coverage=1 00:33:24.728 --rc genhtml_legend=1 00:33:24.728 --rc geninfo_all_blocks=1 00:33:24.728 --rc geninfo_unexecuted_blocks=1 00:33:24.728 00:33:24.728 ' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:24.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.728 --rc genhtml_branch_coverage=1 00:33:24.728 --rc genhtml_function_coverage=1 00:33:24.728 --rc genhtml_legend=1 00:33:24.728 --rc geninfo_all_blocks=1 00:33:24.728 --rc geninfo_unexecuted_blocks=1 00:33:24.728 00:33:24.728 ' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:24.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.728 --rc genhtml_branch_coverage=1 00:33:24.728 --rc genhtml_function_coverage=1 00:33:24.728 --rc genhtml_legend=1 00:33:24.728 --rc geninfo_all_blocks=1 00:33:24.728 --rc geninfo_unexecuted_blocks=1 00:33:24.728 00:33:24.728 ' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:24.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:24.728 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.729 03:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.633 03:14:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:26.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:26.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:26.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:26.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.633 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.634 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.634 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.634 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.634 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:26.634 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.634 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.634 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.893 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.893 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.893 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.893 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.893 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.893 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.893 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:33:26.894 00:33:26.894 --- 10.0.0.2 ping statistics --- 00:33:26.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.894 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:26.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:33:26.894 00:33:26.894 --- 10.0.0.1 ping statistics --- 00:33:26.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.894 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:26.894 03:14:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:26.894 03:14:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:28.273 Waiting for block devices as requested 00:33:28.273 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:28.273 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:28.273 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:28.531 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:28.531 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:28.531 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:28.798 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:28.798 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:28.799 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:28.799 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:29.061 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:29.061 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:29.061 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:29.061 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:29.318 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:29.318 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:29.318 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
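For reference, the configure_kernel_target call traced below boils down to a short configfs sequence: the loop that follows picks the first non-zoned, unused /dev/nvme* block device (spdk-gpt.py reports "No valid GPT data, bailing", so nvme0n1 is free), and that device is then exported as a kernel NVMe-oF TCP target. The xtrace output hides redirection targets, so the attribute file names in this sketch are filled in from the standard nvmet configfs layout rather than read out of common.sh; treat it as the equivalent manual steps under that assumption, not the script itself.

    # Load the kernel target core and its TCP transport (the trace loads nvmet on demand).
    modprobe -a nvmet nvmet-tcp
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    # Create the subsystem, one namespace, and one port object (three mkdirs in the trace).
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    # Set the model string, accept any host NQN, and back namespace 1 with the spare local disk.
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1                               > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1                    > "$subsys/namespaces/1/device_path"
    echo 1                               > "$subsys/namespaces/1/enable"
    # Listen on the initiator-facing address and port used throughout this run.
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    # Linking the subsystem into the port is what makes discovery list it.
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

After this, nvme discover against 10.0.0.1:4420 (run below with the generated host NQN) returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, which is what the identify steps that follow then query.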
00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:29.577 No valid GPT data, bailing 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:29.577 03:14:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:29.577 00:33:29.577 Discovery Log Number of Records 2, Generation counter 2 00:33:29.577 =====Discovery Log Entry 0====== 00:33:29.577 trtype: tcp 00:33:29.577 adrfam: ipv4 00:33:29.577 subtype: current discovery subsystem 00:33:29.577 treq: not specified, sq flow control disable supported 00:33:29.577 portid: 1 00:33:29.577 trsvcid: 4420 00:33:29.577 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:29.577 traddr: 10.0.0.1 00:33:29.577 eflags: none 00:33:29.577 sectype: none 00:33:29.577 =====Discovery Log Entry 1====== 00:33:29.577 trtype: tcp 00:33:29.577 adrfam: ipv4 00:33:29.577 subtype: nvme subsystem 00:33:29.577 treq: not specified, sq flow control disable 
supported 00:33:29.577 portid: 1 00:33:29.577 trsvcid: 4420 00:33:29.577 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:29.577 traddr: 10.0.0.1 00:33:29.577 eflags: none 00:33:29.577 sectype: none 00:33:29.577 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:29.577 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:29.838 ===================================================== 00:33:29.838 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:29.838 ===================================================== 00:33:29.838 Controller Capabilities/Features 00:33:29.838 ================================ 00:33:29.838 Vendor ID: 0000 00:33:29.838 Subsystem Vendor ID: 0000 00:33:29.838 Serial Number: 3a2d32a9855df1dedd67 00:33:29.838 Model Number: Linux 00:33:29.838 Firmware Version: 6.8.9-20 00:33:29.838 Recommended Arb Burst: 0 00:33:29.838 IEEE OUI Identifier: 00 00 00 00:33:29.838 Multi-path I/O 00:33:29.838 May have multiple subsystem ports: No 00:33:29.838 May have multiple controllers: No 00:33:29.838 Associated with SR-IOV VF: No 00:33:29.838 Max Data Transfer Size: Unlimited 00:33:29.838 Max Number of Namespaces: 0 00:33:29.838 Max Number of I/O Queues: 1024 00:33:29.838 NVMe Specification Version (VS): 1.3 00:33:29.838 NVMe Specification Version (Identify): 1.3 00:33:29.838 Maximum Queue Entries: 1024 00:33:29.838 Contiguous Queues Required: No 00:33:29.838 Arbitration Mechanisms Supported 00:33:29.838 Weighted Round Robin: Not Supported 00:33:29.838 Vendor Specific: Not Supported 00:33:29.838 Reset Timeout: 7500 ms 00:33:29.838 Doorbell Stride: 4 bytes 00:33:29.838 NVM Subsystem Reset: Not Supported 00:33:29.838 Command Sets Supported 00:33:29.838 NVM Command Set: Supported 00:33:29.838 Boot Partition: Not Supported 00:33:29.838 Memory Page Size Minimum: 4096 bytes 00:33:29.838 Memory Page Size Maximum: 4096 bytes 00:33:29.838 Persistent Memory Region: Not Supported 00:33:29.838 Optional Asynchronous Events Supported 00:33:29.838 Namespace Attribute Notices: Not Supported 00:33:29.838 Firmware Activation Notices: Not Supported 00:33:29.838 ANA Change Notices: Not Supported 00:33:29.838 PLE Aggregate Log Change Notices: Not Supported 00:33:29.839 LBA Status Info Alert Notices: Not Supported 00:33:29.839 EGE Aggregate Log Change Notices: Not Supported 00:33:29.839 Normal NVM Subsystem Shutdown event: Not Supported 00:33:29.839 Zone Descriptor Change Notices: Not Supported 00:33:29.839 Discovery Log Change Notices: Supported 00:33:29.839 Controller Attributes 00:33:29.839 128-bit Host Identifier: Not Supported 00:33:29.839 Non-Operational Permissive Mode: Not Supported 00:33:29.839 NVM Sets: Not Supported 00:33:29.839 Read Recovery Levels: Not Supported 00:33:29.839 Endurance Groups: Not Supported 00:33:29.839 Predictable Latency Mode: Not Supported 00:33:29.839 Traffic Based Keep ALive: Not Supported 00:33:29.839 Namespace Granularity: Not Supported 00:33:29.839 SQ Associations: Not Supported 00:33:29.839 UUID List: Not Supported 00:33:29.839 Multi-Domain Subsystem: Not Supported 00:33:29.839 Fixed Capacity Management: Not Supported 00:33:29.839 Variable Capacity Management: Not Supported 00:33:29.839 Delete Endurance Group: Not Supported 00:33:29.839 Delete NVM Set: Not Supported 00:33:29.839 Extended LBA Formats Supported: Not Supported 00:33:29.839 Flexible Data Placement 
Supported: Not Supported 00:33:29.839 00:33:29.839 Controller Memory Buffer Support 00:33:29.839 ================================ 00:33:29.839 Supported: No 00:33:29.839 00:33:29.839 Persistent Memory Region Support 00:33:29.839 ================================ 00:33:29.839 Supported: No 00:33:29.839 00:33:29.839 Admin Command Set Attributes 00:33:29.839 ============================ 00:33:29.839 Security Send/Receive: Not Supported 00:33:29.839 Format NVM: Not Supported 00:33:29.839 Firmware Activate/Download: Not Supported 00:33:29.839 Namespace Management: Not Supported 00:33:29.839 Device Self-Test: Not Supported 00:33:29.839 Directives: Not Supported 00:33:29.839 NVMe-MI: Not Supported 00:33:29.839 Virtualization Management: Not Supported 00:33:29.839 Doorbell Buffer Config: Not Supported 00:33:29.839 Get LBA Status Capability: Not Supported 00:33:29.839 Command & Feature Lockdown Capability: Not Supported 00:33:29.839 Abort Command Limit: 1 00:33:29.839 Async Event Request Limit: 1 00:33:29.839 Number of Firmware Slots: N/A 00:33:29.839 Firmware Slot 1 Read-Only: N/A 00:33:29.839 Firmware Activation Without Reset: N/A 00:33:29.839 Multiple Update Detection Support: N/A 00:33:29.839 Firmware Update Granularity: No Information Provided 00:33:29.839 Per-Namespace SMART Log: No 00:33:29.839 Asymmetric Namespace Access Log Page: Not Supported 00:33:29.839 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:29.839 Command Effects Log Page: Not Supported 00:33:29.839 Get Log Page Extended Data: Supported 00:33:29.839 Telemetry Log Pages: Not Supported 00:33:29.839 Persistent Event Log Pages: Not Supported 00:33:29.839 Supported Log Pages Log Page: May Support 00:33:29.839 Commands Supported & Effects Log Page: Not Supported 00:33:29.839 Feature Identifiers & Effects Log Page:May Support 00:33:29.839 NVMe-MI Commands & Effects Log Page: May Support 00:33:29.839 Data Area 4 for Telemetry Log: Not Supported 00:33:29.839 Error Log Page Entries Supported: 1 00:33:29.839 Keep Alive: Not Supported 00:33:29.839 00:33:29.839 NVM Command Set Attributes 00:33:29.839 ========================== 00:33:29.839 Submission Queue Entry Size 00:33:29.839 Max: 1 00:33:29.839 Min: 1 00:33:29.839 Completion Queue Entry Size 00:33:29.839 Max: 1 00:33:29.839 Min: 1 00:33:29.839 Number of Namespaces: 0 00:33:29.839 Compare Command: Not Supported 00:33:29.839 Write Uncorrectable Command: Not Supported 00:33:29.839 Dataset Management Command: Not Supported 00:33:29.839 Write Zeroes Command: Not Supported 00:33:29.839 Set Features Save Field: Not Supported 00:33:29.839 Reservations: Not Supported 00:33:29.839 Timestamp: Not Supported 00:33:29.839 Copy: Not Supported 00:33:29.839 Volatile Write Cache: Not Present 00:33:29.839 Atomic Write Unit (Normal): 1 00:33:29.839 Atomic Write Unit (PFail): 1 00:33:29.839 Atomic Compare & Write Unit: 1 00:33:29.839 Fused Compare & Write: Not Supported 00:33:29.839 Scatter-Gather List 00:33:29.839 SGL Command Set: Supported 00:33:29.839 SGL Keyed: Not Supported 00:33:29.839 SGL Bit Bucket Descriptor: Not Supported 00:33:29.839 SGL Metadata Pointer: Not Supported 00:33:29.839 Oversized SGL: Not Supported 00:33:29.839 SGL Metadata Address: Not Supported 00:33:29.839 SGL Offset: Supported 00:33:29.839 Transport SGL Data Block: Not Supported 00:33:29.839 Replay Protected Memory Block: Not Supported 00:33:29.839 00:33:29.839 Firmware Slot Information 00:33:29.839 ========================= 00:33:29.839 Active slot: 0 00:33:29.839 00:33:29.839 00:33:29.839 Error Log 00:33:29.839 
========= 00:33:29.839 00:33:29.839 Active Namespaces 00:33:29.839 ================= 00:33:29.839 Discovery Log Page 00:33:29.839 ================== 00:33:29.839 Generation Counter: 2 00:33:29.839 Number of Records: 2 00:33:29.839 Record Format: 0 00:33:29.839 00:33:29.839 Discovery Log Entry 0 00:33:29.839 ---------------------- 00:33:29.839 Transport Type: 3 (TCP) 00:33:29.839 Address Family: 1 (IPv4) 00:33:29.839 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:29.839 Entry Flags: 00:33:29.839 Duplicate Returned Information: 0 00:33:29.839 Explicit Persistent Connection Support for Discovery: 0 00:33:29.839 Transport Requirements: 00:33:29.839 Secure Channel: Not Specified 00:33:29.839 Port ID: 1 (0x0001) 00:33:29.839 Controller ID: 65535 (0xffff) 00:33:29.839 Admin Max SQ Size: 32 00:33:29.839 Transport Service Identifier: 4420 00:33:29.839 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:29.839 Transport Address: 10.0.0.1 00:33:29.839 Discovery Log Entry 1 00:33:29.839 ---------------------- 00:33:29.839 Transport Type: 3 (TCP) 00:33:29.839 Address Family: 1 (IPv4) 00:33:29.839 Subsystem Type: 2 (NVM Subsystem) 00:33:29.839 Entry Flags: 00:33:29.839 Duplicate Returned Information: 0 00:33:29.839 Explicit Persistent Connection Support for Discovery: 0 00:33:29.839 Transport Requirements: 00:33:29.839 Secure Channel: Not Specified 00:33:29.839 Port ID: 1 (0x0001) 00:33:29.839 Controller ID: 65535 (0xffff) 00:33:29.839 Admin Max SQ Size: 32 00:33:29.839 Transport Service Identifier: 4420 00:33:29.839 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:29.839 Transport Address: 10.0.0.1 00:33:29.839 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:29.839 get_feature(0x01) failed 00:33:29.839 get_feature(0x02) failed 00:33:29.839 get_feature(0x04) failed 00:33:29.839 ===================================================== 00:33:29.839 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:29.839 ===================================================== 00:33:29.839 Controller Capabilities/Features 00:33:29.839 ================================ 00:33:29.839 Vendor ID: 0000 00:33:29.839 Subsystem Vendor ID: 0000 00:33:29.839 Serial Number: 295cb2442e9beb159a0f 00:33:29.839 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:29.839 Firmware Version: 6.8.9-20 00:33:29.839 Recommended Arb Burst: 6 00:33:29.839 IEEE OUI Identifier: 00 00 00 00:33:29.839 Multi-path I/O 00:33:29.839 May have multiple subsystem ports: Yes 00:33:29.839 May have multiple controllers: Yes 00:33:29.839 Associated with SR-IOV VF: No 00:33:29.839 Max Data Transfer Size: Unlimited 00:33:29.839 Max Number of Namespaces: 1024 00:33:29.839 Max Number of I/O Queues: 128 00:33:29.839 NVMe Specification Version (VS): 1.3 00:33:29.839 NVMe Specification Version (Identify): 1.3 00:33:29.839 Maximum Queue Entries: 1024 00:33:29.839 Contiguous Queues Required: No 00:33:29.839 Arbitration Mechanisms Supported 00:33:29.839 Weighted Round Robin: Not Supported 00:33:29.839 Vendor Specific: Not Supported 00:33:29.839 Reset Timeout: 7500 ms 00:33:29.839 Doorbell Stride: 4 bytes 00:33:29.839 NVM Subsystem Reset: Not Supported 00:33:29.839 Command Sets Supported 00:33:29.839 NVM Command Set: Supported 00:33:29.839 Boot Partition: Not Supported 00:33:29.839 
Memory Page Size Minimum: 4096 bytes 00:33:29.839 Memory Page Size Maximum: 4096 bytes 00:33:29.839 Persistent Memory Region: Not Supported 00:33:29.839 Optional Asynchronous Events Supported 00:33:29.839 Namespace Attribute Notices: Supported 00:33:29.839 Firmware Activation Notices: Not Supported 00:33:29.839 ANA Change Notices: Supported 00:33:29.839 PLE Aggregate Log Change Notices: Not Supported 00:33:29.839 LBA Status Info Alert Notices: Not Supported 00:33:29.839 EGE Aggregate Log Change Notices: Not Supported 00:33:29.840 Normal NVM Subsystem Shutdown event: Not Supported 00:33:29.840 Zone Descriptor Change Notices: Not Supported 00:33:29.840 Discovery Log Change Notices: Not Supported 00:33:29.840 Controller Attributes 00:33:29.840 128-bit Host Identifier: Supported 00:33:29.840 Non-Operational Permissive Mode: Not Supported 00:33:29.840 NVM Sets: Not Supported 00:33:29.840 Read Recovery Levels: Not Supported 00:33:29.840 Endurance Groups: Not Supported 00:33:29.840 Predictable Latency Mode: Not Supported 00:33:29.840 Traffic Based Keep ALive: Supported 00:33:29.840 Namespace Granularity: Not Supported 00:33:29.840 SQ Associations: Not Supported 00:33:29.840 UUID List: Not Supported 00:33:29.840 Multi-Domain Subsystem: Not Supported 00:33:29.840 Fixed Capacity Management: Not Supported 00:33:29.840 Variable Capacity Management: Not Supported 00:33:29.840 Delete Endurance Group: Not Supported 00:33:29.840 Delete NVM Set: Not Supported 00:33:29.840 Extended LBA Formats Supported: Not Supported 00:33:29.840 Flexible Data Placement Supported: Not Supported 00:33:29.840 00:33:29.840 Controller Memory Buffer Support 00:33:29.840 ================================ 00:33:29.840 Supported: No 00:33:29.840 00:33:29.840 Persistent Memory Region Support 00:33:29.840 ================================ 00:33:29.840 Supported: No 00:33:29.840 00:33:29.840 Admin Command Set Attributes 00:33:29.840 ============================ 00:33:29.840 Security Send/Receive: Not Supported 00:33:29.840 Format NVM: Not Supported 00:33:29.840 Firmware Activate/Download: Not Supported 00:33:29.840 Namespace Management: Not Supported 00:33:29.840 Device Self-Test: Not Supported 00:33:29.840 Directives: Not Supported 00:33:29.840 NVMe-MI: Not Supported 00:33:29.840 Virtualization Management: Not Supported 00:33:29.840 Doorbell Buffer Config: Not Supported 00:33:29.840 Get LBA Status Capability: Not Supported 00:33:29.840 Command & Feature Lockdown Capability: Not Supported 00:33:29.840 Abort Command Limit: 4 00:33:29.840 Async Event Request Limit: 4 00:33:29.840 Number of Firmware Slots: N/A 00:33:29.840 Firmware Slot 1 Read-Only: N/A 00:33:29.840 Firmware Activation Without Reset: N/A 00:33:29.840 Multiple Update Detection Support: N/A 00:33:29.840 Firmware Update Granularity: No Information Provided 00:33:29.840 Per-Namespace SMART Log: Yes 00:33:29.840 Asymmetric Namespace Access Log Page: Supported 00:33:29.840 ANA Transition Time : 10 sec 00:33:29.840 00:33:29.840 Asymmetric Namespace Access Capabilities 00:33:29.840 ANA Optimized State : Supported 00:33:29.840 ANA Non-Optimized State : Supported 00:33:29.840 ANA Inaccessible State : Supported 00:33:29.840 ANA Persistent Loss State : Supported 00:33:29.840 ANA Change State : Supported 00:33:29.840 ANAGRPID is not changed : No 00:33:29.840 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:29.840 00:33:29.840 ANA Group Identifier Maximum : 128 00:33:29.840 Number of ANA Group Identifiers : 128 00:33:29.840 Max Number of Allowed Namespaces : 1024 00:33:29.840 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:29.840 Command Effects Log Page: Supported 00:33:29.840 Get Log Page Extended Data: Supported 00:33:29.840 Telemetry Log Pages: Not Supported 00:33:29.840 Persistent Event Log Pages: Not Supported 00:33:29.840 Supported Log Pages Log Page: May Support 00:33:29.840 Commands Supported & Effects Log Page: Not Supported 00:33:29.840 Feature Identifiers & Effects Log Page:May Support 00:33:29.840 NVMe-MI Commands & Effects Log Page: May Support 00:33:29.840 Data Area 4 for Telemetry Log: Not Supported 00:33:29.840 Error Log Page Entries Supported: 128 00:33:29.840 Keep Alive: Supported 00:33:29.840 Keep Alive Granularity: 1000 ms 00:33:29.840 00:33:29.840 NVM Command Set Attributes 00:33:29.840 ========================== 00:33:29.840 Submission Queue Entry Size 00:33:29.840 Max: 64 00:33:29.840 Min: 64 00:33:29.840 Completion Queue Entry Size 00:33:29.840 Max: 16 00:33:29.840 Min: 16 00:33:29.840 Number of Namespaces: 1024 00:33:29.840 Compare Command: Not Supported 00:33:29.840 Write Uncorrectable Command: Not Supported 00:33:29.840 Dataset Management Command: Supported 00:33:29.840 Write Zeroes Command: Supported 00:33:29.840 Set Features Save Field: Not Supported 00:33:29.840 Reservations: Not Supported 00:33:29.840 Timestamp: Not Supported 00:33:29.840 Copy: Not Supported 00:33:29.840 Volatile Write Cache: Present 00:33:29.840 Atomic Write Unit (Normal): 1 00:33:29.840 Atomic Write Unit (PFail): 1 00:33:29.840 Atomic Compare & Write Unit: 1 00:33:29.840 Fused Compare & Write: Not Supported 00:33:29.840 Scatter-Gather List 00:33:29.840 SGL Command Set: Supported 00:33:29.840 SGL Keyed: Not Supported 00:33:29.840 SGL Bit Bucket Descriptor: Not Supported 00:33:29.840 SGL Metadata Pointer: Not Supported 00:33:29.840 Oversized SGL: Not Supported 00:33:29.840 SGL Metadata Address: Not Supported 00:33:29.840 SGL Offset: Supported 00:33:29.840 Transport SGL Data Block: Not Supported 00:33:29.840 Replay Protected Memory Block: Not Supported 00:33:29.840 00:33:29.840 Firmware Slot Information 00:33:29.840 ========================= 00:33:29.840 Active slot: 0 00:33:29.840 00:33:29.840 Asymmetric Namespace Access 00:33:29.840 =========================== 00:33:29.840 Change Count : 0 00:33:29.840 Number of ANA Group Descriptors : 1 00:33:29.840 ANA Group Descriptor : 0 00:33:29.840 ANA Group ID : 1 00:33:29.840 Number of NSID Values : 1 00:33:29.840 Change Count : 0 00:33:29.840 ANA State : 1 00:33:29.840 Namespace Identifier : 1 00:33:29.840 00:33:29.840 Commands Supported and Effects 00:33:29.840 ============================== 00:33:29.840 Admin Commands 00:33:29.840 -------------- 00:33:29.840 Get Log Page (02h): Supported 00:33:29.840 Identify (06h): Supported 00:33:29.840 Abort (08h): Supported 00:33:29.840 Set Features (09h): Supported 00:33:29.840 Get Features (0Ah): Supported 00:33:29.840 Asynchronous Event Request (0Ch): Supported 00:33:29.840 Keep Alive (18h): Supported 00:33:29.840 I/O Commands 00:33:29.840 ------------ 00:33:29.840 Flush (00h): Supported 00:33:29.840 Write (01h): Supported LBA-Change 00:33:29.840 Read (02h): Supported 00:33:29.840 Write Zeroes (08h): Supported LBA-Change 00:33:29.840 Dataset Management (09h): Supported 00:33:29.840 00:33:29.840 Error Log 00:33:29.840 ========= 00:33:29.840 Entry: 0 00:33:29.840 Error Count: 0x3 00:33:29.840 Submission Queue Id: 0x0 00:33:29.840 Command Id: 0x5 00:33:29.840 Phase Bit: 0 00:33:29.840 Status Code: 0x2 00:33:29.840 Status Code Type: 0x0 00:33:29.840 Do Not Retry: 1 00:33:29.840 
Error Location: 0x28 00:33:29.840 LBA: 0x0 00:33:29.840 Namespace: 0x0 00:33:29.840 Vendor Log Page: 0x0 00:33:29.840 ----------- 00:33:29.840 Entry: 1 00:33:29.840 Error Count: 0x2 00:33:29.840 Submission Queue Id: 0x0 00:33:29.840 Command Id: 0x5 00:33:29.840 Phase Bit: 0 00:33:29.840 Status Code: 0x2 00:33:29.840 Status Code Type: 0x0 00:33:29.840 Do Not Retry: 1 00:33:29.840 Error Location: 0x28 00:33:29.840 LBA: 0x0 00:33:29.840 Namespace: 0x0 00:33:29.840 Vendor Log Page: 0x0 00:33:29.840 ----------- 00:33:29.840 Entry: 2 00:33:29.840 Error Count: 0x1 00:33:29.840 Submission Queue Id: 0x0 00:33:29.840 Command Id: 0x4 00:33:29.840 Phase Bit: 0 00:33:29.840 Status Code: 0x2 00:33:29.840 Status Code Type: 0x0 00:33:29.840 Do Not Retry: 1 00:33:29.840 Error Location: 0x28 00:33:29.840 LBA: 0x0 00:33:29.840 Namespace: 0x0 00:33:29.840 Vendor Log Page: 0x0 00:33:29.840 00:33:29.840 Number of Queues 00:33:29.840 ================ 00:33:29.840 Number of I/O Submission Queues: 128 00:33:29.840 Number of I/O Completion Queues: 128 00:33:29.840 00:33:29.840 ZNS Specific Controller Data 00:33:29.840 ============================ 00:33:29.840 Zone Append Size Limit: 0 00:33:29.840 00:33:29.840 00:33:29.840 Active Namespaces 00:33:29.840 ================= 00:33:29.840 get_feature(0x05) failed 00:33:29.840 Namespace ID:1 00:33:29.840 Command Set Identifier: NVM (00h) 00:33:29.840 Deallocate: Supported 00:33:29.840 Deallocated/Unwritten Error: Not Supported 00:33:29.840 Deallocated Read Value: Unknown 00:33:29.840 Deallocate in Write Zeroes: Not Supported 00:33:29.840 Deallocated Guard Field: 0xFFFF 00:33:29.840 Flush: Supported 00:33:29.840 Reservation: Not Supported 00:33:29.840 Namespace Sharing Capabilities: Multiple Controllers 00:33:29.841 Size (in LBAs): 1953525168 (931GiB) 00:33:29.841 Capacity (in LBAs): 1953525168 (931GiB) 00:33:29.841 Utilization (in LBAs): 1953525168 (931GiB) 00:33:29.841 UUID: a03f6f30-b706-483e-8faa-6e8e72a95245 00:33:29.841 Thin Provisioning: Not Supported 00:33:29.841 Per-NS Atomic Units: Yes 00:33:29.841 Atomic Boundary Size (Normal): 0 00:33:29.841 Atomic Boundary Size (PFail): 0 00:33:29.841 Atomic Boundary Offset: 0 00:33:29.841 NGUID/EUI64 Never Reused: No 00:33:29.841 ANA group ID: 1 00:33:29.841 Namespace Write Protected: No 00:33:29.841 Number of LBA Formats: 1 00:33:29.841 Current LBA Format: LBA Format #00 00:33:29.841 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:29.841 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:29.841 rmmod nvme_tcp 00:33:29.841 rmmod nvme_fabrics 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:29.841 03:14:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.841 03:14:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:32.377 03:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:33.316 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:33.316 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:33.316 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:33.316 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:33.316 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:33.316 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:33:33.316 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:33.316 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:33.316 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:34.256 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:34.515 00:33:34.515 real 0m9.919s 00:33:34.515 user 0m2.150s 00:33:34.515 sys 0m3.781s 00:33:34.515 03:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.515 03:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:34.515 ************************************ 00:33:34.515 END TEST nvmf_identify_kernel_target 00:33:34.515 ************************************ 00:33:34.515 03:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:34.515 03:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:34.515 03:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.515 03:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.515 ************************************ 00:33:34.515 START TEST nvmf_auth_host 00:33:34.515 ************************************ 00:33:34.515 03:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:34.515 * Looking for test storage... 
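The lcov version probe that the next test repeats here (and that also opened this section) is easier to read outside the trace. Below is a minimal bash sketch of the lt/cmp_versions helpers it exercises; the real scripts/common.sh additionally validates each component with the decimal helper visible in the trace, which this sketch omits.

    # lt A B: succeed when version A sorts before version B.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local op=$2 IFS=.-:            # split versions on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            # Missing components count as 0, so 1.15 is effectively compared against 2.0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]
    }

Here lt 1.15 2 succeeds, so the lcov 1.x branch/function coverage flags get exported into LCOV_OPTS and LCOV in the trace that follows, exactly as in the identical probe earlier in this section.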
00:33:34.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.515 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.516 --rc genhtml_branch_coverage=1 00:33:34.516 --rc genhtml_function_coverage=1 00:33:34.516 --rc genhtml_legend=1 00:33:34.516 --rc geninfo_all_blocks=1 00:33:34.516 --rc geninfo_unexecuted_blocks=1 00:33:34.516 00:33:34.516 ' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.516 --rc genhtml_branch_coverage=1 00:33:34.516 --rc genhtml_function_coverage=1 00:33:34.516 --rc genhtml_legend=1 00:33:34.516 --rc geninfo_all_blocks=1 00:33:34.516 --rc geninfo_unexecuted_blocks=1 00:33:34.516 00:33:34.516 ' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.516 --rc genhtml_branch_coverage=1 00:33:34.516 --rc genhtml_function_coverage=1 00:33:34.516 --rc genhtml_legend=1 00:33:34.516 --rc geninfo_all_blocks=1 00:33:34.516 --rc geninfo_unexecuted_blocks=1 00:33:34.516 00:33:34.516 ' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.516 --rc genhtml_branch_coverage=1 00:33:34.516 --rc genhtml_function_coverage=1 00:33:34.516 --rc genhtml_legend=1 00:33:34.516 --rc geninfo_all_blocks=1 00:33:34.516 --rc geninfo_unexecuted_blocks=1 00:33:34.516 00:33:34.516 ' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.516 03:14:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.516 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:34.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.775 03:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.679 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.680 03:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:36.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:36.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.680 
03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:36.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:36.680 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.680 03:14:47 
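gather_supported_nvmf_pci_devs above matches the machine's NICs against a table of known PCI device IDs (both 0x159b functions are E810 ports driven by ice, hence the e810 array) and then resolves each matching PCI function to its kernel net device by listing the net/ directory under its sysfs node; the two cvl_* interfaces it reports become the endpoints for the TCP tests. The resolution step amounts to:

# map each supported PCI function to its net interface (addresses taken from the trace)
for pci in 0000:0a:00.0 0000:0a:00.1; do
    echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
done
# prints: 0000:0a:00.0 -> cvl_0_0 and 0000:0a:00.1 -> cvl_0_1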
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.680 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:33:36.939 00:33:36.939 --- 10.0.0.2 ping statistics --- 00:33:36.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.939 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:33:36.939 00:33:36.939 --- 10.0.0.1 ping statistics --- 00:33:36.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.939 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=383707 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 383707 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 383707 ']' 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
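nvmf_tcp_init gives the two ports distinct roles: cvl_0_0 becomes the target interface and is moved into a fresh network namespace (cvl_0_0_ns_spdk) with 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1/24, an iptables ACCEPT rule tagged SPDK_NVMF (so the iptr cleanup seen earlier can strip it again) opens TCP port 4420 on the initiator side, and one ping in each direction proves connectivity before nvmf_tgt is launched inside the namespace with -L nvme_auth. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator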
00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.939 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b507ce2048061ce58b8bfaffc981a1d 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eXx 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b507ce2048061ce58b8bfaffc981a1d 0 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b507ce2048061ce58b8bfaffc981a1d 0 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b507ce2048061ce58b8bfaffc981a1d 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eXx 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eXx 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eXx 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.198 03:14:47 
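Each gen_dhchap_key call above produces one DH-HMAC-CHAP secret file: xxd pulls the requested number of random hex characters out of /dev/urandom, an inline python step wraps that ASCII key material in the NVMe "DHHC-1:<hash id>:<base64 blob>:" representation (hash id 00 for null, 01/02/03 for sha256/sha384/sha512 per the digests table above), and the result lands in a mode-0600 temp file whose path is stored in keys[] or ckeys[]. The same recipe repeats for the remaining host and controller keys below. A rough equivalent of the null/32 case; the CRC-32 handling is an assumption here, format_dhchap_key in nvmf/common.sh is the authoritative recipe:

key=$(xxd -p -c0 -l 16 /dev/urandom)          # 32 hex characters of key material
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
# assumed: a little-endian CRC-32 of the key material is appended before base64 encoding
crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)
print('DHHC-1:00:' + base64.b64encode(key + crc).decode() + ':')
PY
chmod 0600 "$file"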
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=40a80310200e20d4cc4dcf26f9a84e44fe6a5d4ae448e3c517e366ece94216fb 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.B9E 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 40a80310200e20d4cc4dcf26f9a84e44fe6a5d4ae448e3c517e366ece94216fb 3 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 40a80310200e20d4cc4dcf26f9a84e44fe6a5d4ae448e3c517e366ece94216fb 3 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=40a80310200e20d4cc4dcf26f9a84e44fe6a5d4ae448e3c517e366ece94216fb 00:33:37.198 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.B9E 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.B9E 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.B9E 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:37.199 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=11611fd2a6bc7983454c69e40eb0b0616c4c680ec7164345 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pq4 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 11611fd2a6bc7983454c69e40eb0b0616c4c680ec7164345 0 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 11611fd2a6bc7983454c69e40eb0b0616c4c680ec7164345 0 
00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=11611fd2a6bc7983454c69e40eb0b0616c4c680ec7164345 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pq4 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pq4 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pq4 00:33:37.457 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e47ff7d74e471d754ddd7a1b80079772e3e4506c6cb7195 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4SM 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e47ff7d74e471d754ddd7a1b80079772e3e4506c6cb7195 2 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e47ff7d74e471d754ddd7a1b80079772e3e4506c6cb7195 2 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9e47ff7d74e471d754ddd7a1b80079772e3e4506c6cb7195 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4SM 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4SM 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4SM 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.458 03:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e263b1cd0587ee59a4d05e979756e213 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.z0q 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e263b1cd0587ee59a4d05e979756e213 1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e263b1cd0587ee59a4d05e979756e213 1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e263b1cd0587ee59a4d05e979756e213 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.z0q 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.z0q 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.z0q 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a9c56235b65bc0f4d4686d3e9f11ab7b 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Lxf 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a9c56235b65bc0f4d4686d3e9f11ab7b 1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a9c56235b65bc0f4d4686d3e9f11ab7b 1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a9c56235b65bc0f4d4686d3e9f11ab7b 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Lxf 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Lxf 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Lxf 00:33:37.458 03:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f5445ab339ee253763b467474c7b3a512caab99c4e0c74f 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2yg 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f5445ab339ee253763b467474c7b3a512caab99c4e0c74f 2 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f5445ab339ee253763b467474c7b3a512caab99c4e0c74f 2 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f5445ab339ee253763b467474c7b3a512caab99c4e0c74f 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2yg 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2yg 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2yg 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:37.458 03:14:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c2df4eab73ebafbe6c63346af285a11e 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vyH 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c2df4eab73ebafbe6c63346af285a11e 0 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c2df4eab73ebafbe6c63346af285a11e 0 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c2df4eab73ebafbe6c63346af285a11e 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:37.458 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vyH 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vyH 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.vyH 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=98c426dda6faeea674b424b2ed3aa5c2c8667685c098317343fb1cb30f83c688 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pvw 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 98c426dda6faeea674b424b2ed3aa5c2c8667685c098317343fb1cb30f83c688 3 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 98c426dda6faeea674b424b2ed3aa5c2c8667685c098317343fb1cb30f83c688 3 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=98c426dda6faeea674b424b2ed3aa5c2c8667685c098317343fb1cb30f83c688 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pvw 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pvw 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pvw 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 383707 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 383707 ']' 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.717 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eXx 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.B9E ]] 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.B9E 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pq4 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4SM ]] 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.4SM 00:33:37.976 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.z0q 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Lxf ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lxf 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2yg 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vyH ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vyH 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pvw 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:37.977 03:14:48 
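With nvmf_tgt listening on /var/tmp/spdk.sock, the loop above registers every generated secret with the target's keyring via the keyring_file_add_key RPC, key0..key4 for the host secrets and ckey0..ckey3 for the controller (bidirectional) secrets, so later steps can refer to each secret by name instead of by path. Equivalent rpc.py calls for the first pair (rpc_cmd is a thin wrapper around scripts/rpc.py; file names taken from the trace):

scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.eXx
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.B9E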
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:37.977 03:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:39.379 Waiting for block devices as requested 00:33:39.379 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:39.379 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:39.379 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:39.379 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:39.638 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:39.638 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:39.638 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:39.638 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:39.896 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:39.897 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:39.897 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:39.897 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:40.155 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:40.155 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:40.155 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:40.155 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:40.413 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:40.670 No valid GPT data, bailing 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:40.670 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:40.929 03:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:40.929 00:33:40.929 Discovery Log Number of Records 2, Generation counter 2 00:33:40.929 =====Discovery Log Entry 0====== 00:33:40.929 trtype: tcp 00:33:40.929 adrfam: ipv4 00:33:40.929 subtype: current discovery subsystem 00:33:40.929 treq: not specified, sq flow control disable supported 00:33:40.929 portid: 1 00:33:40.929 trsvcid: 4420 00:33:40.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:40.929 traddr: 10.0.0.1 00:33:40.929 eflags: none 00:33:40.929 sectype: none 00:33:40.929 =====Discovery Log Entry 1====== 00:33:40.929 trtype: tcp 00:33:40.929 adrfam: ipv4 00:33:40.929 subtype: nvme subsystem 00:33:40.929 treq: not specified, sq flow control disable supported 00:33:40.929 portid: 1 00:33:40.929 trsvcid: 4420 00:33:40.929 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:40.929 traddr: 10.0.0.1 00:33:40.929 eflags: none 00:33:40.929 sectype: none 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.929 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.930 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.188 nvme0n1 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.189 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.448 nvme0n1 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.448 03:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.448 03:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.707 nvme0n1 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:41.707 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:41.708 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:41.708 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.708 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.708 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.966 nvme0n1 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.966 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.224 nvme0n1 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:42.224 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.225 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.483 nvme0n1 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.483 03:14:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:42.483 03:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.741 nvme0n1 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.741 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.000 
03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.000 nvme0n1 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.000 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.259 03:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.259 nvme0n1 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.259 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.539 03:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.539 03:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.539 nvme0n1 00:33:43.539 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.539 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.539 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.539 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.539 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.539 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:43.853 03:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.853 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.854 nvme0n1 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.854 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.470 03:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.470 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.744 nvme0n1 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:44.744 03:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.744 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.030 nvme0n1 00:33:45.030 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:45.030 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.030 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.030 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.030 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.030 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.323 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.583 nvme0n1 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.583 03:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.583 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.584 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.842 nvme0n1 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.842 03:14:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.842 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.101 nvme0n1 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.101 03:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.003 03:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.570 nvme0n1 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 
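(Editor's note: a minimal sketch of the per-iteration RPC sequence the trace above is exercising, using the test harness's rpc_cmd helper as shown in the log; digest/dhgroup/keyid values here are the ones from this ffdhe6144 keyid=1 pass, and the key material itself comes from the keys[]/ckeys[] arrays set up earlier in auth.sh, not shown here.)

  # 1. Restrict the host to the digest/dhgroup combination under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # 2. Attach to the target with the keyid under test (controller key only when a ckey is defined)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. Confirm the controller authenticated and came up, then detach before the next keyid
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0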
00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:48.570 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.571 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.137 nvme0n1 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.137 03:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.137 03:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.702 nvme0n1 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:49.702 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.703 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.268 nvme0n1 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.268 03:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.268 03:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.268 03:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.836 nvme0n1 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
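The stretch of trace above is one pass of the host-side authentication loop: bdev_nvme is limited to a single digest and DH group, the controller is attached with the key under test, its presence is verified, and it is detached again. A minimal stand-alone sketch of that sequence is shown below, using the same RPCs the test drives through its rpc_cmd wrapper; invoking them via SPDK's scripts/rpc.py and having the secrets already registered in the keyring under the names key0/ckey0 are assumptions here, since that setup happened earlier in the run.

    # One host-side DH-HMAC-CHAP pass (sketch; key0/ckey0 assumed to already exist in the SPDK keyring)
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0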
ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.836 03:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.770 nvme0n1 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.770 03:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.706 nvme0n1 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:52.706 
03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.706 03:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.641 nvme0n1 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.641 
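The host/auth.sh line markers repeating through the trace (@100-@104) give the overall shape of the test: nested loops over digests, DH groups and key indices, with each combination first provisioned on the kernel nvmet target and then exercised from the SPDK host. Reconstructed from those markers, the driver loop looks roughly like this (helper bodies elided; only the digests and ffdhe groups that actually appear in this stretch are certain members of the arrays):

    for digest in "${digests[@]}"; do             # sha256, sha384, ... per the trace
      for dhgroup in "${dhgroups[@]}"; do         # ffdhe2048 ... ffdhe8192 per the trace
        for keyid in "${!keys[@]}"; do            # key indices 0..4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach on the host
        done
      done
    done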
03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.641 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.577 nvme0n1 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.577 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.578 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:54.578 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.578 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:54.578 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.578 03:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
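Key index 4, exercised just above, is the one case where the controller key is empty ([[ -z '' ]] in the trace), so the attach is issued with --dhchap-key alone and authentication is unidirectional; for the other indices a --dhchap-ctrlr-key is added and the controller has to authenticate back to the host. The host/auth.sh@58 line shows the idiom that makes those extra arguments optional, a :+ parameter expansion that yields either an empty array or the two words. A small illustration of the same idiom:

    # ckeys[keyid] empty/unset -> ckey=()                                  -> nothing appended
    # ckeys[keyid] non-empty   -> ckey=(--dhchap-ctrlr-key "ckey${keyid}") -> both words appended
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"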
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.578 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.511 nvme0n1 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.511 03:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.511 nvme0n1 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.511 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.771 nvme0n1 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:55.771 03:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.771 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.030 nvme0n1 00:33:56.030 03:15:06 
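The nvmf/common.sh@769-@783 block that precedes every attach is the get_main_ns_ip helper picking the address to connect to: it maps the transport to the name of an environment variable and prints that variable's value, which is why 10.0.0.1 is echoed before each bdev_nvme_attach_controller. The trace only shows the expanded values (tcp, NVMF_INITIATOR_IP, 10.0.0.1), so the variable names in the reconstruction below are placeholders rather than the script's actual ones:

    get_main_ns_ip() {
        # Placeholder reconstruction; $transport stands in for whatever variable the script really tests.
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $transport ]] && return 1
        ip=${ip_candidates[$transport]}    # e.g. NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1        # indirect expansion: the address itself
        echo "${!ip}"                      # e.g. 10.0.0.1
    }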
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.030 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.031 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.289 nvme0n1 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.289 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.548 nvme0n1 00:33:56.548 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.548 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.548 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.548 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.548 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.548 03:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
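On the target side, the nvmet_auth_set_key call that precedes each pass echoes exactly four values, visible at host/auth.sh@48-@51: the digest as 'hmac(shaN)', the DH group, the host secret and (when present) the controller secret. Those echoes are presumably redirected into the kernel nvmet configfs DH-CHAP attributes for the host entry; the destination paths in the sketch below are an assumption based on the standard Linux nvmet interface, not something visible in this stretch of the trace:

    # Sketch only: provisioning DH-HMAC-CHAP on the kernel nvmet target (attribute paths are assumptions)
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest under test
    echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"   # DH group under test
    echo "$key"         > "$host_dir/dhchap_key"       # host secret, DHHC-1:... string
    [[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # controller secret for bidirectional auth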
dhgroup=ffdhe3072 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.548 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.807 nvme0n1 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.807 
03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.807 03:15:07 
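The host/auth.sh@101-@104 frames traced here repeat for every combination under test: an outer loop over the FFDHE groups and an inner loop over key ids 0-4, each iteration first programming the key on the target side with nvmet_auth_set_key and then authenticating from the host with connect_authenticate. A minimal sketch of that driver loop, reconstructed only from the traced line numbers and values (the keys array name and the exact dhgroups list are assumptions; only ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144 appear in this portion of the trace, and the digest is fixed at sha384 in this pass):
  # sketch of the sha384 pass as traced at host/auth.sh@101-@104
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do   # @101
      for keyid in "${!keys[@]}"; do                           # @102, key ids 0..4
          nvmet_auth_set_key  sha384 "$dhgroup" "$keyid"       # @103: program the target-side key
          connect_authenticate sha384 "$dhgroup" "$keyid"      # @104: attach/verify/detach on the host
      done
  done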
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.807 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.066 nvme0n1 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.066 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.324 nvme0n1 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.324 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.325 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.583 nvme0n1 00:33:57.583 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.583 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.583 03:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.583 
03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.583 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.584 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.842 nvme0n1 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.842 
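Key id 4 is the one entry with no controller (bidirectional) secret: host/auth.sh@46 leaves ckey empty, the [[ -z '' ]] check at @51 skips echoing it, and the @58 expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) therefore yields an empty array, so the attach above carries only --dhchap-key key4 while the key 0-3 rounds also pass --dhchap-ctrlr-key. The ${parameter:+word} idiom used there expands to word only when the parameter is set and non-empty; a standalone illustration (the names below are illustrative, not taken from the script):
  # ${parameter:+word} lets an optional flag vanish cleanly from a command line
  ckeys=("secret0" "")                 # illustrative: index 1 has no controller key
  for keyid in 0 1; do
      extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo cmd --dhchap-key "key${keyid}" "${extra[@]}"
  done
  # prints: cmd --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # prints: cmd --dhchap-key key1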
03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:57.842 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.843 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 nvme0n1 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.101 03:15:08 
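Each connect_authenticate round issues the same four RPCs that appear in these frames: restrict the host to the single digest/dhgroup under test, attach over TCP with the DH-HMAC-CHAP key names, confirm the controller actually came up, then detach. Written out as one standalone sequence (rpc_cmd here stands for the SPDK scripts/rpc.py wrapper used by these test scripts; the address, NQNs and key names are exactly the ones visible in the trace):
  # one authentication round as traced for sha384 / ffdhe4096 / key id 1
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # the attach only completes if DH-HMAC-CHAP succeeds; verify, then clean up
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0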
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.101 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.360 nvme0n1 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.360 03:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.619 nvme0n1 00:33:58.619 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.619 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.619 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.619 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.619 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.619 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.876 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.134 nvme0n1 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.134 03:15:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.134 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.135 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.394 nvme0n1 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.394 03:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.961 nvme0n1 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:59.961 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.962 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.529 nvme0n1 00:34:00.529 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.529 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.529 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.529 03:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.529 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.529 03:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.529 03:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.529 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.530 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.530 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.530 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.096 nvme0n1 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.096 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.097 03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.097 
03:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.664 nvme0n1 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.664 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.665 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.360 nvme0n1 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.360 03:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:02.360 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.361 03:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.927 nvme0n1 00:34:02.927 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.927 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.927 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.927 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.927 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.185 03:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.118 nvme0n1 00:34:04.118 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.118 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.118 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.119 
03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.119 03:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 nvme0n1 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:05.052 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.053 03:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.618 nvme0n1 00:34:05.618 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.618 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.618 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.618 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.618 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.618 03:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.876 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.877 03:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.877 03:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.811 nvme0n1 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.811 nvme0n1 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.811 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.812 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.071 nvme0n1 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:07.071 
03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.071 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.329 nvme0n1 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.330 
03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.330 03:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.588 nvme0n1 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.588 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.589 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.847 nvme0n1 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.847 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.107 nvme0n1 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.107 
03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.107 03:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.107 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.366 nvme0n1 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:08.366 03:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.366 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.367 03:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.625 nvme0n1 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:34:08.625 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.626 03:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.626 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.884 nvme0n1 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.884 
03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.884 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
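Note: the log above repeats the same host-side RPC sequence for every digest/dhgroup/keyid combination (here sha512 with ffdhe2048, ffdhe3072 and ffdhe4096, keyids 0-4). A minimal sketch of one iteration follows, assuming rpc_cmd is the test suite's wrapper around scripts/rpc.py and that the key names keyN/ckeyN were registered earlier in host/auth.sh (their registration is not shown in this excerpt):
  # One pass of the loop captured in the log: configure DH-HMAC-CHAP options,
  # attach the controller with the matching key pair, confirm a controller
  # named nvme0 exists, then detach it before the next combination.
  digest=sha512 dhgroup=ffdhe2048 keyid=2   # values taken from the log; any tested combination works the same way
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion visible in the log adds the controller-key argument only when a ckey exists for that keyid; for keyid 4 the ckey is empty, so the attach is issued with --dhchap-key key4 alone.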
00:34:09.142 nvme0n1 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.142 03:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.142 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.143 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.401 nvme0n1 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.401 03:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:09.401 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.402 03:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.402 03:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.660 nvme0n1 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.660 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.918 nvme0n1 00:34:09.918 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.918 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.918 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.918 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.918 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.918 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.176 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.177 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.435 nvme0n1 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.435 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.436 03:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.694 nvme0n1 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.694 03:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.694 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 nvme0n1 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.261 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.262 03:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.262 03:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.828 nvme0n1 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.828 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.829 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.396 nvme0n1 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.396 03:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.961 nvme0n1 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.961 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.962 03:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.962 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.528 nvme0n1 00:34:13.528 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.528 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.528 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.528 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.528 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.528 03:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmI1MDdjZTIwNDgwNjFjZTU4YjhiZmFmZmM5ODFhMWS3XqKC: 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBhODAzMTAyMDBlMjBkNGNjNGRjZjI2ZjlhODRlNDRmZTZhNWQ0YWU0NDhlM2M1MTdlMzY2ZWNlOTQyMTZmYo6fLa8=: 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.528 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.529 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.529 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.529 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.465 nvme0n1 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.465 03:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.400 nvme0n1 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.400 03:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.400 03:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.400 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.401 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.401 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.401 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.401 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.401 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.401 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.401 03:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.337 nvme0n1 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1NDQ1YWIzMzllZTI1Mzc2M2I0Njc0NzRjN2IzYTUxMmNhYWI5OWM0ZTBjNzRm5uDVVg==: 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzJkZjRlYWI3M2ViYWZiZTZjNjMzNDZhZjI4NWExMWUofi31: 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.337 03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.337 
03:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.272 nvme0n1 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OThjNDI2ZGRhNmZhZWVhNjc0YjQyNGIyZWQzYWE1YzJjODY2NzY4NWMwOTgzMTczNDNmYjFjYjMwZjgzYzY4OE7Sd94=: 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.272 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.273 03:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.208 nvme0n1 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.208 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 request: 00:34:18.209 { 00:34:18.209 "name": "nvme0", 00:34:18.209 "trtype": "tcp", 00:34:18.209 "traddr": "10.0.0.1", 00:34:18.209 "adrfam": "ipv4", 00:34:18.209 "trsvcid": "4420", 00:34:18.209 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:18.209 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:18.209 "prchk_reftag": false, 00:34:18.209 "prchk_guard": false, 00:34:18.209 "hdgst": false, 00:34:18.209 "ddgst": false, 00:34:18.209 "allow_unrecognized_csi": false, 00:34:18.209 "method": "bdev_nvme_attach_controller", 00:34:18.209 "req_id": 1 00:34:18.209 } 00:34:18.209 Got JSON-RPC error response 00:34:18.209 response: 00:34:18.209 { 00:34:18.209 "code": -5, 00:34:18.209 "message": "Input/output error" 00:34:18.209 } 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 request: 00:34:18.209 { 00:34:18.209 "name": "nvme0", 00:34:18.209 "trtype": "tcp", 00:34:18.209 "traddr": "10.0.0.1", 00:34:18.209 "adrfam": "ipv4", 00:34:18.209 "trsvcid": "4420", 00:34:18.209 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:18.209 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:18.209 "prchk_reftag": false, 00:34:18.209 "prchk_guard": false, 00:34:18.209 "hdgst": false, 00:34:18.209 "ddgst": false, 00:34:18.209 "dhchap_key": "key2", 00:34:18.209 "allow_unrecognized_csi": false, 00:34:18.209 "method": "bdev_nvme_attach_controller", 00:34:18.209 "req_id": 1 00:34:18.209 } 00:34:18.209 Got JSON-RPC error response 00:34:18.209 response: 00:34:18.209 { 00:34:18.209 "code": -5, 00:34:18.209 "message": "Input/output error" 00:34:18.209 } 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:18.209 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.210 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.210 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.210 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.467 request: 00:34:18.467 { 00:34:18.467 "name": "nvme0", 00:34:18.467 "trtype": "tcp", 00:34:18.467 "traddr": "10.0.0.1", 00:34:18.467 "adrfam": "ipv4", 00:34:18.467 "trsvcid": "4420", 00:34:18.467 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:18.467 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:18.467 "prchk_reftag": false, 00:34:18.467 "prchk_guard": false, 00:34:18.467 "hdgst": false, 00:34:18.467 "ddgst": false, 00:34:18.467 "dhchap_key": "key1", 00:34:18.467 "dhchap_ctrlr_key": "ckey2", 00:34:18.467 "allow_unrecognized_csi": false, 00:34:18.467 "method": "bdev_nvme_attach_controller", 00:34:18.467 "req_id": 1 00:34:18.467 } 00:34:18.467 Got JSON-RPC error response 00:34:18.467 response: 00:34:18.467 { 00:34:18.467 "code": -5, 00:34:18.467 "message": "Input/output 
error" 00:34:18.467 } 00:34:18.467 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:18.467 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:18.467 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.467 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.467 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:18.467 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:18.467 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.468 03:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.468 nvme0n1 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.468 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.739 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.739 request: 00:34:18.739 { 00:34:18.739 "name": "nvme0", 00:34:18.740 "dhchap_key": "key1", 00:34:18.740 "dhchap_ctrlr_key": "ckey2", 00:34:18.740 "method": "bdev_nvme_set_keys", 00:34:18.740 "req_id": 1 00:34:18.740 } 00:34:18.740 Got JSON-RPC error response 00:34:18.740 response: 00:34:18.740 { 00:34:18.740 "code": -13, 00:34:18.740 "message": "Permission denied" 00:34:18.740 } 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:18.740 03:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:19.677 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.677 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:19.677 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.677 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.677 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MTFmZDJhNmJjNzk4MzQ1NGM2OWU0MGViMGIwNjE2YzRjNjgwZWM3MTY0MzQ1azU7Eg==: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: ]] 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWU0N2ZmN2Q3NGU0NzFkNzU0ZGRkN2ExYjgwMDc5NzcyZTNlNDUwNmM2Y2I3MTk18v5jCA==: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.935 
03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.935 nvme0n1 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTI2M2IxY2QwNTg3ZWU1OWE0ZDA1ZTk3OTc1NmUyMTOARcMX: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: ]] 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljNTYyMzViNjViYzBmNGQ0Njg2ZDNlOWYxMWFiN2INEIl9: 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:19.935 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.936 03:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.936 request: 00:34:19.936 { 00:34:19.936 "name": "nvme0", 00:34:19.936 "dhchap_key": "key2", 00:34:19.936 "dhchap_ctrlr_key": "ckey1", 00:34:19.936 "method": "bdev_nvme_set_keys", 00:34:19.936 "req_id": 1 00:34:19.936 } 00:34:19.936 Got JSON-RPC error response 00:34:19.936 response: 00:34:19.936 { 00:34:19.936 "code": -13, 00:34:19.936 "message": "Permission denied" 00:34:19.936 } 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.936 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.194 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:20.194 03:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:21.129 03:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.129 03:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:21.129 03:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.129 03:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.129 03:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.129 03:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:21.129 03:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:22.065 03:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:22.065 rmmod nvme_tcp 00:34:22.065 rmmod nvme_fabrics 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 383707 ']' 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 383707 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 383707 ']' 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 383707 00:34:22.065 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:22.324 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.324 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383707 00:34:22.324 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:22.324 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:22.324 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383707' 00:34:22.324 killing process with pid 383707 00:34:22.324 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 383707 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 383707 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:22.325 03:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:24.862 03:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:25.801 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:25.801 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:25.801 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:25.801 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:25.801 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:25.801 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:25.801 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:25.801 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:25.801 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:26.740 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:26.740 03:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eXx /tmp/spdk.key-null.pq4 /tmp/spdk.key-sha256.z0q /tmp/spdk.key-sha384.2yg /tmp/spdk.key-sha512.pvw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:26.740 03:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:27.675 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:27.675 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:27.675 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:34:27.675 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:27.675 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:27.675 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:27.675 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:27.675 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:27.675 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:27.675 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:27.675 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:27.675 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:27.675 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:27.675 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:27.675 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:27.675 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:27.675 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:27.935 00:34:27.935 real 0m53.493s 00:34:27.935 user 0m50.866s 00:34:27.935 sys 0m6.053s 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.935 ************************************ 00:34:27.935 END TEST nvmf_auth_host 00:34:27.935 ************************************ 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.935 ************************************ 00:34:27.935 START TEST nvmf_digest 00:34:27.935 ************************************ 00:34:27.935 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:28.195 * Looking for test storage... 
00:34:28.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:28.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.195 --rc genhtml_branch_coverage=1 00:34:28.195 --rc genhtml_function_coverage=1 00:34:28.195 --rc genhtml_legend=1 00:34:28.195 --rc geninfo_all_blocks=1 00:34:28.195 --rc geninfo_unexecuted_blocks=1 00:34:28.195 00:34:28.195 ' 00:34:28.195 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:28.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.196 --rc genhtml_branch_coverage=1 00:34:28.196 --rc genhtml_function_coverage=1 00:34:28.196 --rc genhtml_legend=1 00:34:28.196 --rc geninfo_all_blocks=1 00:34:28.196 --rc geninfo_unexecuted_blocks=1 00:34:28.196 00:34:28.196 ' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.196 --rc genhtml_branch_coverage=1 00:34:28.196 --rc genhtml_function_coverage=1 00:34:28.196 --rc genhtml_legend=1 00:34:28.196 --rc geninfo_all_blocks=1 00:34:28.196 --rc geninfo_unexecuted_blocks=1 00:34:28.196 00:34:28.196 ' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.196 --rc genhtml_branch_coverage=1 00:34:28.196 --rc genhtml_function_coverage=1 00:34:28.196 --rc genhtml_legend=1 00:34:28.196 --rc geninfo_all_blocks=1 00:34:28.196 --rc geninfo_unexecuted_blocks=1 00:34:28.196 00:34:28.196 ' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.196 
03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:28.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:28.196 03:15:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:28.196 03:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.739 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.740 
03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:30.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:30.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:30.740 Found net devices under 0000:0a:00.0: cvl_0_0 
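The trace above shows gather_supported_nvmf_pci_devs matching known Intel E810/X722 and Mellanox device IDs and then resolving each matching PCI function to its kernel interface through sysfs. A minimal standalone sketch of that lookup, assuming the Intel E810 ID (0x8086:0x159b) seen on this test bed; it illustrates the pattern only and is not the actual common.sh helper:

    #!/usr/bin/env bash
    # Print the kernel net devices behind each Intel E810 (8086:159b) PCI function.
    intel=0x8086
    e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810" ]] || continue
        # Each bound PCI function exposes its interfaces under .../net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done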
00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:30.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.740 03:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:34:30.740 00:34:30.740 --- 10.0.0.2 ping statistics --- 00:34:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.740 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:30.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:34:30.740 00:34:30.740 --- 10.0.0.1 ping statistics --- 00:34:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.740 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:30.740 ************************************ 00:34:30.740 START TEST nvmf_digest_clean 00:34:30.740 ************************************ 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:30.740 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=394210 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 394210 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394210 ']' 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.741 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.741 [2024-11-19 03:15:41.192434] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:34:30.741 [2024-11-19 03:15:41.192534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.741 [2024-11-19 03:15:41.264199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.741 [2024-11-19 03:15:41.307014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.741 [2024-11-19 03:15:41.307094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.741 [2024-11-19 03:15:41.307119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.741 [2024-11-19 03:15:41.307129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.741 [2024-11-19 03:15:41.307138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
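The nvmf_tcp_init and nvmfappstart steps traced above amount to moving one port of the NIC into a private namespace, addressing both ends, opening the NVMe/TCP port in iptables, and then launching nvmf_tgt inside that namespace in a paused (--wait-for-rpc) state. A condensed sketch using the interface names and addresses from this run (paths shortened for readability; the real helpers live in test/nvmf/common.sh):

    TARGET_IF=cvl_0_0        # moved into the target namespace
    INITIATOR_IF=cvl_0_1     # stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Accept NVMe/TCP traffic arriving from the initiator side on port 4420.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
    # Start the NVMe-oF target inside the namespace; it waits for RPC configuration.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &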
00:34:30.741 [2024-11-19 03:15:41.307728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.999 null0 00:34:30.999 [2024-11-19 03:15:41.556952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.999 [2024-11-19 03:15:41.581196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394238 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394238 /var/tmp/bperf.sock 00:34:30.999 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394238 ']' 00:34:31.000 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:31.000 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.000 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:31.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:31.000 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.000 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:31.000 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:31.258 [2024-11-19 03:15:41.632008] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:34:31.258 [2024-11-19 03:15:41.632095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394238 ] 00:34:31.258 [2024-11-19 03:15:41.702600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.258 [2024-11-19 03:15:41.749989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.258 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.258 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:31.258 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:31.258 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:31.258 03:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:31.824 03:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.824 03:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:32.390 nvme0n1 00:34:32.390 03:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:32.390 03:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:32.390 Running I/O for 2 seconds... 
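This first randread 4096/qd128 case follows the run_bperf pattern visible in the trace: bdevperf is started idle and paused, the framework is released over its private RPC socket, the NVMe/TCP controller is attached with data digest enabled (--ddgst is what exercises crc32c), and perform_tests drives the workload. A condensed sketch with paths relative to the SPDK tree (the cnode1 subsystem and the 10.0.0.2:4420 listener were created earlier in the run):

    SOCK=/var/tmp/bperf.sock
    # -z keeps bdevperf idle; --wait-for-rpc pauses it until framework_start_init.
    ./build/examples/bdevperf -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s "$SOCK" framework_start_init
    # Attach the target over NVMe/TCP with data digest on, exposing bdev nvme0n1.
    ./scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Kick off the configured 2-second workload against the attached bdev.
    ./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests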
00:34:34.695 18481.00 IOPS, 72.19 MiB/s [2024-11-19T02:15:45.310Z] 18493.50 IOPS, 72.24 MiB/s 00:34:34.695 Latency(us) 00:34:34.695 [2024-11-19T02:15:45.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.695 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:34.695 nvme0n1 : 2.01 18483.71 72.20 0.00 0.00 6917.16 3398.16 19223.89 00:34:34.695 [2024-11-19T02:15:45.310Z] =================================================================================================================== 00:34:34.695 [2024-11-19T02:15:45.310Z] Total : 18483.71 72.20 0.00 0.00 6917.16 3398.16 19223.89 00:34:34.695 { 00:34:34.695 "results": [ 00:34:34.695 { 00:34:34.695 "job": "nvme0n1", 00:34:34.695 "core_mask": "0x2", 00:34:34.695 "workload": "randread", 00:34:34.695 "status": "finished", 00:34:34.695 "queue_depth": 128, 00:34:34.695 "io_size": 4096, 00:34:34.695 "runtime": 2.007984, 00:34:34.695 "iops": 18483.713017633607, 00:34:34.695 "mibps": 72.20200397513128, 00:34:34.695 "io_failed": 0, 00:34:34.695 "io_timeout": 0, 00:34:34.695 "avg_latency_us": 6917.1637402467795, 00:34:34.695 "min_latency_us": 3398.162962962963, 00:34:34.695 "max_latency_us": 19223.893333333333 00:34:34.695 } 00:34:34.695 ], 00:34:34.695 "core_count": 1 00:34:34.695 } 00:34:34.695 03:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:34.695 03:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:34.695 03:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:34.696 03:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:34.696 03:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:34.696 | select(.opcode=="crc32c") 00:34:34.696 | "\(.module_name) \(.executed)"' 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394238 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394238 ']' 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394238 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394238 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394238' 00:34:34.696 killing process with pid 394238 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394238 00:34:34.696 Received shutdown signal, test time was about 2.000000 seconds 00:34:34.696 00:34:34.696 Latency(us) 00:34:34.696 [2024-11-19T02:15:45.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.696 [2024-11-19T02:15:45.311Z] =================================================================================================================== 00:34:34.696 [2024-11-19T02:15:45.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:34.696 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394238 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394759 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394759 /var/tmp/bperf.sock 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394759 ']' 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.954 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.954 [2024-11-19 03:15:45.490124] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
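After each workload the test decides whether digests were actually computed, and by which accel module, by filtering accel_get_stats for the crc32c opcode exactly as traced above; with DSA disabled the expected module is software. Stripped of the bperf_rpc wrappers, the check is roughly:

    SOCK=/var/tmp/bperf.sock
    # Keep "module_name executed" for the crc32c opcode only.
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s "$SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # Digests must have been computed, and in software on this configuration.
    (( acc_executed > 0 ))        || echo "no crc32c operations executed"
    [[ $acc_module == software ]] || echo "unexpected accel module: $acc_module"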
00:34:34.954 [2024-11-19 03:15:45.490217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394759 ] 00:34:34.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:34.954 Zero copy mechanism will not be used. 00:34:34.954 [2024-11-19 03:15:45.555582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.213 [2024-11-19 03:15:45.599038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.213 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.213 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:35.213 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:35.213 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:35.213 03:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:35.472 03:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.472 03:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:36.038 nvme0n1 00:34:36.038 03:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:36.038 03:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:36.038 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:36.038 Zero copy mechanism will not be used. 00:34:36.038 Running I/O for 2 seconds... 
00:34:37.980 4917.00 IOPS, 614.62 MiB/s [2024-11-19T02:15:48.854Z] 5008.50 IOPS, 626.06 MiB/s 00:34:38.239 Latency(us) 00:34:38.239 [2024-11-19T02:15:48.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.239 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:38.239 nvme0n1 : 2.00 5007.31 625.91 0.00 0.00 3191.19 813.13 9369.22 00:34:38.239 [2024-11-19T02:15:48.854Z] =================================================================================================================== 00:34:38.239 [2024-11-19T02:15:48.854Z] Total : 5007.31 625.91 0.00 0.00 3191.19 813.13 9369.22 00:34:38.239 { 00:34:38.239 "results": [ 00:34:38.239 { 00:34:38.239 "job": "nvme0n1", 00:34:38.239 "core_mask": "0x2", 00:34:38.239 "workload": "randread", 00:34:38.239 "status": "finished", 00:34:38.239 "queue_depth": 16, 00:34:38.239 "io_size": 131072, 00:34:38.239 "runtime": 2.003672, 00:34:38.239 "iops": 5007.306585109738, 00:34:38.239 "mibps": 625.9133231387173, 00:34:38.239 "io_failed": 0, 00:34:38.239 "io_timeout": 0, 00:34:38.239 "avg_latency_us": 3191.193023909986, 00:34:38.239 "min_latency_us": 813.1318518518518, 00:34:38.239 "max_latency_us": 9369.22074074074 00:34:38.239 } 00:34:38.239 ], 00:34:38.239 "core_count": 1 00:34:38.239 } 00:34:38.239 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:38.239 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:38.239 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:38.239 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:38.239 | select(.opcode=="crc32c") 00:34:38.239 | "\(.module_name) \(.executed)"' 00:34:38.239 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394759 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394759 ']' 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394759 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394759 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394759' 00:34:38.498 killing process with pid 394759 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394759 00:34:38.498 Received shutdown signal, test time was about 2.000000 seconds 00:34:38.498 00:34:38.498 Latency(us) 00:34:38.498 [2024-11-19T02:15:49.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.498 [2024-11-19T02:15:49.113Z] =================================================================================================================== 00:34:38.498 [2024-11-19T02:15:49.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:38.498 03:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394759 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395164 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395164 /var/tmp/bperf.sock 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395164 ']' 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:38.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.759 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:38.759 [2024-11-19 03:15:49.169657] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
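The per-run JSON blocks make the headline numbers easy to cross-check: MiB/s is simply iops times the I/O size. For the 131072-byte randread result above, 5007.31 * 131072 / 1048576 ≈ 625.91 MiB/s, matching the reported mibps. A one-liner to recompute it from a saved result (the bperf_result.json file name here is illustrative):

    # Recompute MiB/s from the iops and io_size fields of a saved bdevperf result.
    jq -r '.results[0] | "\(.iops) \(.io_size)"' bperf_result.json |
        awk '{ printf "%.2f MiB/s\n", $1 * $2 / (1024 * 1024) }'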
00:34:38.759 [2024-11-19 03:15:49.169794] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395164 ] 00:34:38.759 [2024-11-19 03:15:49.236791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.759 [2024-11-19 03:15:49.282877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.016 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.016 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:39.016 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:39.016 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:39.016 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:39.275 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.275 03:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.843 nvme0n1 00:34:39.843 03:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:39.843 03:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:39.843 Running I/O for 2 seconds... 
00:34:42.153 20355.00 IOPS, 79.51 MiB/s [2024-11-19T02:15:52.768Z] 19377.50 IOPS, 75.69 MiB/s 00:34:42.153 Latency(us) 00:34:42.153 [2024-11-19T02:15:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.153 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:42.153 nvme0n1 : 2.01 19377.74 75.69 0.00 0.00 6591.08 2621.44 9126.49 00:34:42.153 [2024-11-19T02:15:52.768Z] =================================================================================================================== 00:34:42.153 [2024-11-19T02:15:52.768Z] Total : 19377.74 75.69 0.00 0.00 6591.08 2621.44 9126.49 00:34:42.153 { 00:34:42.153 "results": [ 00:34:42.153 { 00:34:42.153 "job": "nvme0n1", 00:34:42.153 "core_mask": "0x2", 00:34:42.153 "workload": "randwrite", 00:34:42.153 "status": "finished", 00:34:42.153 "queue_depth": 128, 00:34:42.153 "io_size": 4096, 00:34:42.153 "runtime": 2.006581, 00:34:42.153 "iops": 19377.737554576666, 00:34:42.153 "mibps": 75.6942873225651, 00:34:42.153 "io_failed": 0, 00:34:42.153 "io_timeout": 0, 00:34:42.153 "avg_latency_us": 6591.077443536688, 00:34:42.153 "min_latency_us": 2621.44, 00:34:42.153 "max_latency_us": 9126.494814814814 00:34:42.153 } 00:34:42.153 ], 00:34:42.153 "core_count": 1 00:34:42.153 } 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:42.153 | select(.opcode=="crc32c") 00:34:42.153 | "\(.module_name) \(.executed)"' 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395164 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395164 ']' 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395164 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395164 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395164' 00:34:42.153 killing process with pid 395164 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395164 00:34:42.153 Received shutdown signal, test time was about 2.000000 seconds 00:34:42.153 00:34:42.153 Latency(us) 00:34:42.153 [2024-11-19T02:15:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.153 [2024-11-19T02:15:52.768Z] =================================================================================================================== 00:34:42.153 [2024-11-19T02:15:52.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.153 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395164 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395579 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395579 /var/tmp/bperf.sock 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395579 ']' 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:42.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.412 03:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:42.412 [2024-11-19 03:15:52.961225] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
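Each bperf run is torn down through killprocess, whose trace repeats above: verify the pid is set and alive, branch on uname, read the command name with ps, refuse to signal a sudo wrapper, then kill and wait. A reduced sketch of that pattern (the real helper in autotest_common.sh covers more cases than shown here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        # Only act if the process is still running.
        kill -0 "$pid" 2>/dev/null || return 0
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Never signal a privileged sudo wrapper directly.
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }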
00:34:42.412 [2024-11-19 03:15:52.961328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395579 ] 00:34:42.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:42.412 Zero copy mechanism will not be used. 00:34:42.412 [2024-11-19 03:15:53.028309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.671 [2024-11-19 03:15:53.074374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.671 03:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.671 03:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:42.671 03:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:42.671 03:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:42.671 03:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:43.239 03:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.239 03:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.498 nvme0n1 00:34:43.498 03:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:43.498 03:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:43.757 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:43.757 Zero copy mechanism will not be used. 00:34:43.757 Running I/O for 2 seconds... 
00:34:45.625 5646.00 IOPS, 705.75 MiB/s [2024-11-19T02:15:56.240Z] 5971.50 IOPS, 746.44 MiB/s 00:34:45.625 Latency(us) 00:34:45.625 [2024-11-19T02:15:56.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.625 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:45.625 nvme0n1 : 2.00 5967.68 745.96 0.00 0.00 2673.65 1820.44 9611.95 00:34:45.625 [2024-11-19T02:15:56.240Z] =================================================================================================================== 00:34:45.625 [2024-11-19T02:15:56.240Z] Total : 5967.68 745.96 0.00 0.00 2673.65 1820.44 9611.95 00:34:45.625 { 00:34:45.625 "results": [ 00:34:45.625 { 00:34:45.625 "job": "nvme0n1", 00:34:45.625 "core_mask": "0x2", 00:34:45.625 "workload": "randwrite", 00:34:45.625 "status": "finished", 00:34:45.625 "queue_depth": 16, 00:34:45.625 "io_size": 131072, 00:34:45.625 "runtime": 2.004631, 00:34:45.625 "iops": 5967.6818327163455, 00:34:45.625 "mibps": 745.9602290895432, 00:34:45.625 "io_failed": 0, 00:34:45.625 "io_timeout": 0, 00:34:45.625 "avg_latency_us": 2673.651537673258, 00:34:45.625 "min_latency_us": 1820.4444444444443, 00:34:45.625 "max_latency_us": 9611.946666666667 00:34:45.625 } 00:34:45.625 ], 00:34:45.625 "core_count": 1 00:34:45.625 } 00:34:45.625 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:45.626 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:45.626 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:45.626 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:45.626 | select(.opcode=="crc32c") 00:34:45.626 | "\(.module_name) \(.executed)"' 00:34:45.626 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395579 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395579 ']' 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395579 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.884 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395579 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395579' 00:34:46.143 killing process with pid 395579 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395579 00:34:46.143 Received shutdown signal, test time was about 2.000000 seconds 00:34:46.143 00:34:46.143 Latency(us) 00:34:46.143 [2024-11-19T02:15:56.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.143 [2024-11-19T02:15:56.758Z] =================================================================================================================== 00:34:46.143 [2024-11-19T02:15:56.758Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395579 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 394210 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394210 ']' 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394210 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.143 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394210 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394210' 00:34:46.404 killing process with pid 394210 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394210 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394210 00:34:46.404 00:34:46.404 real 0m15.821s 00:34:46.404 user 0m31.611s 00:34:46.404 sys 0m4.335s 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:46.404 ************************************ 00:34:46.404 END TEST nvmf_digest_clean 00:34:46.404 ************************************ 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.404 03:15:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:46.404 ************************************ 00:34:46.404 START TEST nvmf_digest_error 00:34:46.404 ************************************ 00:34:46.404 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:34:46.404 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:46.404 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:46.404 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:46.404 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=396125 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 396125 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396125 ']' 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.665 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.665 [2024-11-19 03:15:57.074104] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:34:46.665 [2024-11-19 03:15:57.074204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.665 [2024-11-19 03:15:57.146646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.665 [2024-11-19 03:15:57.189779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:46.665 [2024-11-19 03:15:57.189839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:46.665 [2024-11-19 03:15:57.189862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.665 [2024-11-19 03:15:57.189873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:46.665 [2024-11-19 03:15:57.189883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
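From here the error-path variant (nvmf_digest_error) repeats the workload with corruption injected into the target's crc32c handling: crc32c is assigned to the accel "error" module, the null0 bdev and a TCP listener on 10.0.0.2:4420 are configured, and a second bdevperf instance (randread, 4 KiB I/O, queue depth 128) attaches with --ddgst before the fault is armed. Condensed to the RPC calls that appear in the log below (paths abbreviated, sockets as used in this run), the sequence is roughly:

  # target side (default /var/tmp/spdk.sock): route crc32c through the error-injection module
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # initiator side, over the bdevperf socket: unlimited bdev retries, then attach with data digest on
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm the fault (the test first clears any prior injection with -t disable), then run the workload
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # expected outcome: the host-side data digest check fails, reads complete with
  # COMMAND TRANSIENT TRANSPORT ERROR, and bdevperf keeps retrying (retry count -1),
  # which is the stream of nvme_tcp/nvme_qpair messages that follows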
00:34:46.665 [2024-11-19 03:15:57.190453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.923 [2024-11-19 03:15:57.327220] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.923 null0 00:34:46.923 [2024-11-19 03:15:57.439833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.923 [2024-11-19 03:15:57.464060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396160 00:34:46.923 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396160 /var/tmp/bperf.sock 00:34:46.924 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396160 ']' 00:34:46.924 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.924 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:46.924 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.924 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:46.924 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.924 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.924 [2024-11-19 03:15:57.513803] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:34:46.924 [2024-11-19 03:15:57.513892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396160 ] 00:34:47.183 [2024-11-19 03:15:57.580645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.183 [2024-11-19 03:15:57.626684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.183 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.183 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:47.183 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:47.183 03:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:47.441 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:47.441 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.441 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.441 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.441 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.441 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:48.009 nvme0n1 00:34:48.009 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:48.009 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.009 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:48.009 
03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.009 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:48.009 03:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:48.009 Running I/O for 2 seconds... 00:34:48.009 [2024-11-19 03:15:58.491161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.491215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.491236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.507065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.507098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.507115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.521097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.521140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.521162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.532635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.532686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.532720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.546530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.546564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.546606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.559394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.559441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.559459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.572373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.572404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.572421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.584357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.584391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.584409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.597120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.597151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.597167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.610724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.610757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.610776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.009 [2024-11-19 03:15:58.623655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.009 [2024-11-19 03:15:58.623697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.009 [2024-11-19 03:15:58.623719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.638428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.638458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.638475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.649275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.649331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.649353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.664462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.664497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.664514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.677604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.677646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.677685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.688572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.688602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.688619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.703174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.703203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.703220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.717057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.717088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.717104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.727918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.727949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.727966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.741821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.741853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.741870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.756378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.756410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.756427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.767437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.767468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.767484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.780855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.780887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.780904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.793574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.793641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.793667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.806716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.806746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.806763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.818914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.818944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.818961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.832067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.832097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 
[2024-11-19 03:15:58.832113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.844942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.844973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.844990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.857596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.857626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.857642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.868620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.868651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.868682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.269 [2024-11-19 03:15:58.882244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.269 [2024-11-19 03:15:58.882299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.269 [2024-11-19 03:15:58.882325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.897862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.897894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.897910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.910225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.910255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.910272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.922560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.922590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13601 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.922607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.937105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.937137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.937154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.951196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.951227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.951244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.962400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.962431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.962447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.982962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.983006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.983023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.528 [2024-11-19 03:15:58.998466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.528 [2024-11-19 03:15:58.998498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.528 [2024-11-19 03:15:58.998516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.012646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.012713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.012735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.023577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.023607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:40 nsid:1 lba:10628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.023624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.039938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.039969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.040000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.052896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.052927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.052944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.067073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.067105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.067122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.078148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.078178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.078194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.090114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.090160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.090177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.103579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.103609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.103625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.116071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.116101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.116123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.128388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.128419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.128436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.529 [2024-11-19 03:15:59.141504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.529 [2024-11-19 03:15:59.141537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.529 [2024-11-19 03:15:59.141570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.154489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.154522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.154540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.166908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.166942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.166960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.179210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.179257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.179284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.193438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.193469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.193486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.206859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 
[2024-11-19 03:15:59.206896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.206922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.219278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.219310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.219327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.234552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.234588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.234605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.245752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.245783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.245800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.259239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.798 [2024-11-19 03:15:59.259280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.798 [2024-11-19 03:15:59.259322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.798 [2024-11-19 03:15:59.274822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.274854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.274872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.286876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.286907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.286924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.300520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.300569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.300587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.314462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.314513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.314532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.326885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.326916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.326933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.340862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.340895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.340912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.357378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.357408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.357424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.373305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.373336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.373352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.389302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.389334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.389351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.799 [2024-11-19 03:15:59.404074] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:48.799 [2024-11-19 03:15:59.404113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.799 [2024-11-19 03:15:59.404144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.058 [2024-11-19 03:15:59.415751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.058 [2024-11-19 03:15:59.415786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.058 [2024-11-19 03:15:59.415805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.058 [2024-11-19 03:15:59.428368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.428399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.428417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.441972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.442005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.442038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.456149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.456183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.456203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.467584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.467615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.467638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 19006.00 IOPS, 74.24 MiB/s [2024-11-19T02:15:59.674Z] [2024-11-19 03:15:59.482861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.482896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.482914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.495520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.495552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.495569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.509537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.509569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.509586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.524194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.524226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.524242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.540492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.540524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.540541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.554319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.554351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.554388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.567017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.567050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.567068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.578521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.578552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.578569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.593274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.593326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.593356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.608050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.608098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.608116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.620194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.620243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.620263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.633119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.633152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.633169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.647244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.647301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.647325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.659009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.659042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 [2024-11-19 03:15:59.659060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.059 [2024-11-19 03:15:59.674523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.059 [2024-11-19 03:15:59.674574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.059 
[2024-11-19 03:15:59.674592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.691480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.691511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.691529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.705791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.705832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.705851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.722713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.722746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.722762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.733284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.733316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.733333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.747765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.747799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.747818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.761112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.761145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.761162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.774321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.774353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6348 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.774371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.786284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.786315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.786332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.800520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.800553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.800572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.814653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.814686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.814714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.828330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.828375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.828415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.839143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.839175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.839191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.852093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.852125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.852142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.866126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.866157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:25092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.866175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.880852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.880893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.880922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.894985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.895028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.895060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.319 [2024-11-19 03:15:59.911035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.319 [2024-11-19 03:15:59.911069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.319 [2024-11-19 03:15:59.911088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.320 [2024-11-19 03:15:59.922370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.320 [2024-11-19 03:15:59.922401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.320 [2024-11-19 03:15:59.922418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.320 [2024-11-19 03:15:59.935950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.320 [2024-11-19 03:15:59.935985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.320 [2024-11-19 03:15:59.936004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.578 [2024-11-19 03:15:59.950051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.578 [2024-11-19 03:15:59.950086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.578 [2024-11-19 03:15:59.950105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.578 [2024-11-19 03:15:59.962355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.578 [2024-11-19 03:15:59.962388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.578 [2024-11-19 03:15:59.962405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.578 [2024-11-19 03:15:59.977788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.578 [2024-11-19 03:15:59.977821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.578 [2024-11-19 03:15:59.977839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.578 [2024-11-19 03:15:59.993902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:15:59.993934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:15:59.993958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.007963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.008021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.008054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.021017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.021066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.021083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.039814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.039855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.039875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.051343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.051375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.051392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.065505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 
00:34:49.579 [2024-11-19 03:16:00.065537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.065564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.081445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.081478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.081495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.095637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.095672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.095697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.108800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.108835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.108853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.123946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.123981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.124000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.136768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.136802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.136821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.149488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.149521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.149553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.164804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.164838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.164857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.176015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.176069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.176091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.579 [2024-11-19 03:16:00.192227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.579 [2024-11-19 03:16:00.192273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.579 [2024-11-19 03:16:00.192292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.837 [2024-11-19 03:16:00.206826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.837 [2024-11-19 03:16:00.206860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.837 [2024-11-19 03:16:00.206880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.837 [2024-11-19 03:16:00.218516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.837 [2024-11-19 03:16:00.218550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.837 [2024-11-19 03:16:00.218569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.837 [2024-11-19 03:16:00.232644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.837 [2024-11-19 03:16:00.232703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.837 [2024-11-19 03:16:00.232726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.837 [2024-11-19 03:16:00.243958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.837 [2024-11-19 03:16:00.243992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.837 [2024-11-19 03:16:00.244010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.837 [2024-11-19 03:16:00.258513] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.837 [2024-11-19 03:16:00.258546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.837 [2024-11-19 03:16:00.258564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.837 [2024-11-19 03:16:00.271413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.837 [2024-11-19 03:16:00.271445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.271462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.284233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.284264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.284281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.297079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.297111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.297127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.309887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.309921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.309939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.322593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.322625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.322643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.335428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.335459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.335476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:49.838 [2024-11-19 03:16:00.348206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.348252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.348270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.360541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.360572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.360588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.375295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.375328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.375345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.388313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.388362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.388387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.401370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.401405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.401438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.414011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.414058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.414082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.427067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.427101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.427119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.440487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.440525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.440544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.838 [2024-11-19 03:16:00.454626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:49.838 [2024-11-19 03:16:00.454660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.838 [2024-11-19 03:16:00.454679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.096 [2024-11-19 03:16:00.466475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:50.096 [2024-11-19 03:16:00.466506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.096 [2024-11-19 03:16:00.466522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.096 18823.50 IOPS, 73.53 MiB/s [2024-11-19T02:16:00.711Z] [2024-11-19 03:16:00.481462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b83f0) 00:34:50.096 [2024-11-19 03:16:00.481493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.096 [2024-11-19 03:16:00.481510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.096 00:34:50.096 Latency(us) 00:34:50.096 [2024-11-19T02:16:00.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.096 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:50.096 nvme0n1 : 2.05 18455.27 72.09 0.00 0.00 6786.32 3568.07 49321.91 00:34:50.096 [2024-11-19T02:16:00.711Z] =================================================================================================================== 00:34:50.096 [2024-11-19T02:16:00.711Z] Total : 18455.27 72.09 0.00 0.00 6786.32 3568.07 49321.91 00:34:50.096 { 00:34:50.096 "results": [ 00:34:50.096 { 00:34:50.096 "job": "nvme0n1", 00:34:50.096 "core_mask": "0x2", 00:34:50.096 "workload": "randread", 00:34:50.096 "status": "finished", 00:34:50.096 "queue_depth": 128, 00:34:50.096 "io_size": 4096, 00:34:50.096 "runtime": 2.046841, 00:34:50.096 "iops": 18455.268386748165, 00:34:50.096 "mibps": 72.09089213573502, 00:34:50.096 "io_failed": 0, 00:34:50.096 "io_timeout": 0, 00:34:50.096 "avg_latency_us": 6786.320348809961, 00:34:50.096 "min_latency_us": 3568.071111111111, 00:34:50.096 "max_latency_us": 49321.90814814815 00:34:50.096 } 00:34:50.096 ], 00:34:50.096 "core_count": 1 00:34:50.096 } 00:34:50.096 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:50.096 03:16:00 
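The block above is bdevperf's end-of-run summary for the first error-injection pass (4 KiB random reads at queue depth 128); the same numbers are printed once as a table and once as a JSON object, and the get_transient_errcount step that follows reads the matching error counters back over RPC. As a quick cross-check of the summary, assuming the JSON blob were saved to a file (results.json is a hypothetical name; the harness does not actually write one):

  # Hypothetical sanity check: MiB/s should equal IOPS * io_size / 2^20.
  jq -r '.results[0] | "\(.iops) \(.io_size)"' results.json \
    | awk '{printf "%.2f MiB/s\n", $1 * $2 / 1048576}'
  # 18455.27 * 4096 / 1048576 ~= 72.09 MiB/s, matching both the table and the "mibps" field.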
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:50.096 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:50.096 | .driver_specific 00:34:50.096 | .nvme_error 00:34:50.096 | .status_code 00:34:50.096 | .command_transient_transport_error' 00:34:50.096 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396160 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396160 ']' 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396160 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396160 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396160' 00:34:50.354 killing process with pid 396160 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396160 00:34:50.354 Received shutdown signal, test time was about 2.000000 seconds 00:34:50.354 00:34:50.354 Latency(us) 00:34:50.354 [2024-11-19T02:16:00.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.354 [2024-11-19T02:16:00.969Z] =================================================================================================================== 00:34:50.354 [2024-11-19T02:16:00.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:50.354 03:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396160 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396562 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:50.612 03:16:01 
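The trace above is the pass/fail decision for that run: digest.sh queries bdevperf's RPC socket for the bdev's I/O statistics and extracts the transient-transport-error counter with jq (148 errors here), requiring only that the count be greater than zero before tearing bdevperf down. A condensed sketch of that check, using the exact jq filter from the trace (the surrounding JSON shape is inferred from the filter rather than dumped in the log), not a copy of host/digest.sh:

  # Sketch of get_transient_errcount plus the assertion at host/digest.sh@71.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) || echo "FAIL: no transient transport errors recorded despite digest corruption"

Once the check passes, killprocess sends a plain kill to the bdevperf pid (396160) and waits for it, which is why the log shows a "Received shutdown signal" message and an all-zero latency summary before the next run is launched.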
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396562 /var/tmp/bperf.sock 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396562 ']' 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:50.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.612 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.612 [2024-11-19 03:16:01.100069] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:34:50.612 [2024-11-19 03:16:01.100176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396562 ] 00:34:50.612 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:50.612 Zero copy mechanism will not be used. 00:34:50.612 [2024-11-19 03:16:01.171161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.612 [2024-11-19 03:16:01.219022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.870 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.870 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:50.870 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:50.870 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:51.129 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:51.129 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.129 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.129 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.129 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.129 03:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.387 nvme0n1 00:34:51.647 03:16:02 
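The records above set up the second pass (run_bperf_err randread 131072 16): a fresh bdevperf is started with a 128 KiB I/O size and queue depth 16 (-o 131072 -q 16), NVMe error statistics are enabled so bdev_get_iostat can report the per-status-code counters read by the jq filter, any previous CRC32C error injection is cleared, and the controller is attached with --ddgst so data digests are carried and verified on the TCP connection; the larger I/O size is why the error records that follow show len:32 instead of len:1. A condensed sketch of that RPC sequence as it appears in the trace (bperf_rpc and rpc_cmd are the harness helpers; the socket rpc_cmd targets is not shown in this excerpt, so the helper definitions here are assumptions):

  # Sketch of the per-run setup from host/digest.sh, reconstructed from the trace above.
  bperf_rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # RPC to the bdevperf app
  rpc_cmd()   { scripts/rpc.py "$@"; }                          # harness RPC helper (default socket assumed)

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep error counters; unlimited bdev retries
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # start with injection off
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # exposes bdev nvme0n1
  # The records just below re-arm the injection (accel_error_inject_error -o crc32c -t corrupt -i 32)
  # before perform_tests starts the 2-second randread workload.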
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:51.647 03:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.647 03:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.647 03:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.647 03:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:51.647 03:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.647 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.647 Zero copy mechanism will not be used. 00:34:51.647 Running I/O for 2 seconds... 00:34:51.647 [2024-11-19 03:16:02.132392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.132469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.132490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.137497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.137532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.137559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.142366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.142406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.142427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.147190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.147223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.147250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.151949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.152005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.152023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.157047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.157079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.157097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.162083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.162115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.162134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.166775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.166808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.166826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.171634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.171665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.171711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.176343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.176374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.176396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.181163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.181208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.181225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.184575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.184605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.184623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.188250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.188281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.188306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.192182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.192213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.192232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.195555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.195586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.195603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.198583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.198628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.198645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.203027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.203058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.203082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.206940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.206971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.206990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.210845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.647 [2024-11-19 03:16:02.210876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.647 [2024-11-19 03:16:02.210895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.647 [2024-11-19 03:16:02.213759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.213796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.213814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.218660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.218728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.218748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.224562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.224596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.224613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.231437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.231488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.231513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.238011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.238044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.238085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.244529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.244562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.244591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.250988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.251025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.648 [2024-11-19 03:16:02.251042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.257382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.257414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.257456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.648 [2024-11-19 03:16:02.263959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.648 [2024-11-19 03:16:02.264004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.648 [2024-11-19 03:16:02.264022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.270567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.270600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.270617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.277209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.277256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.277275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.283713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.283747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.283765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.289987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.290043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.290061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.293909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.293941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.293959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.297568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.297598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.297619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.302077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.302123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.302149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.306854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.306887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.306904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.311615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.311644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.908 [2024-11-19 03:16:02.311700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.908 [2024-11-19 03:16:02.316094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.908 [2024-11-19 03:16:02.316146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.316173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.320825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.320856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.320873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.325380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.325410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.325428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.329884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.329916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.329933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.334379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.334408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.334427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.339799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.339833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.339851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.344845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.344878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.344896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.350754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.350787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.350804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.358342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.358394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.358412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.366218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.366250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.366271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.374000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.374057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.374073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.382299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.382328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.382345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.389978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.390026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.390044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.397775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.397822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.397840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.405204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.405235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.405257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.412980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.413028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.413051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.421044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 
00:34:51.909 [2024-11-19 03:16:02.421091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.421110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.428577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.428622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.428641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.435388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.435435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.435452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.443049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.443094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.443112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.450748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.450779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.450798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.458342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.458388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.458413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.466027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.466058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.466078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.472896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.472928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.472962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.909 [2024-11-19 03:16:02.480905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.909 [2024-11-19 03:16:02.480938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.909 [2024-11-19 03:16:02.480970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.910 [2024-11-19 03:16:02.488234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.910 [2024-11-19 03:16:02.488266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.910 [2024-11-19 03:16:02.488295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.910 [2024-11-19 03:16:02.495855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.910 [2024-11-19 03:16:02.495888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.910 [2024-11-19 03:16:02.495905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.910 [2024-11-19 03:16:02.503850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.910 [2024-11-19 03:16:02.503898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.910 [2024-11-19 03:16:02.503917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.910 [2024-11-19 03:16:02.511799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.910 [2024-11-19 03:16:02.511832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.910 [2024-11-19 03:16:02.511850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.910 [2024-11-19 03:16:02.518291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.910 [2024-11-19 03:16:02.518322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.910 [2024-11-19 03:16:02.518343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.910 [2024-11-19 03:16:02.523263] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:51.910 [2024-11-19 03:16:02.523295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.910 [2024-11-19 03:16:02.523312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.528004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.528033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.528065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.532839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.532872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.532891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.537489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.537521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.537539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.542725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.542778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.542811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.548767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.548800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.548821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.556294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.556326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.556346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:34:52.170 [2024-11-19 03:16:02.562427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.562459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.562479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.569152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.569183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.569203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.574898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.574931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.574951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.580448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.580480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.580511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.586071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.586103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.586121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.590973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.591005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.591037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.596957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.596990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.597014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.604721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.604768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.604786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.611267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.611298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.611318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.619234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.619265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.619292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.626311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.626343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.626361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.632062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.632094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.632113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.636989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.637022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.637039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.642511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.642542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.642584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.648728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.170 [2024-11-19 03:16:02.648766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.170 [2024-11-19 03:16:02.648785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.170 [2024-11-19 03:16:02.654356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.654403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.654421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.659522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.659553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.659572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.662796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.662827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.662845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.667751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.667782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.667799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.672889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.672934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.672950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.677744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.677776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.677795] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.683396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.683427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.683446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.690874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.690906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.690923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.697365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.697396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.697419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.703018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.703048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.703066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.709022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.709051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.709067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.715780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.715811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.715831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.721586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.721631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:52.171 [2024-11-19 03:16:02.721650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.726830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.726875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.726894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.732000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.732046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.732062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.736638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.736667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.736708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.741363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.741407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.741429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.746134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.746179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.746196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.750764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.750796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.750815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.755476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.755506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.755527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.759922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.759953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.759991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.764456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.764486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.764505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.768640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.768685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.768719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.771537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.771566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.771587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.775730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.775762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.775786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.778698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.778735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.778753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.782243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.171 [2024-11-19 03:16:02.782289] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.171 [2024-11-19 03:16:02.782306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.171 [2024-11-19 03:16:02.785506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.172 [2024-11-19 03:16:02.785536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.172 [2024-11-19 03:16:02.785556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.431 [2024-11-19 03:16:02.789114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.431 [2024-11-19 03:16:02.789144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.789161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.792483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.792514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.792532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.796075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.796107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.796124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.799803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.799834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.799853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.804343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.804374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.804391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.808941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.808973] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.808990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.813491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.813522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.813539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.818088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.818119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.818136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.822825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.822855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.822871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.828298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.828344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.828360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.833524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.833569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.833585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.838497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.838530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.838562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.842947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.842980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.843012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.847460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.847507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.847524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.852026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.852072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.852095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.857981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.858015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.858051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.863095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.863127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.863144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.867768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.867800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.867817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.872516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.872547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.872564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.878153] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.878186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.878203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.883739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.883769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.883785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.890914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.890946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.890978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.898799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.898833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.898851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.905978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.906030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.906048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.914117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.914163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.914179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.432 [2024-11-19 03:16:02.921811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.921858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.921875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:34:52.432 [2024-11-19 03:16:02.928817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.432 [2024-11-19 03:16:02.928870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.432 [2024-11-19 03:16:02.928890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.934114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.934146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.934163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.938661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.938714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.938746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.943256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.943289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.943306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.948419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.948451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.948483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.954519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.954551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.954582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.962134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.962166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.962197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.967726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.967773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.967791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.973115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.973148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.973165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.978476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.978522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.978539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.984326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.984361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.984379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.988994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.989026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.989044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.993686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.993726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.993744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:02.998319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:02.998351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:02.998368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.003552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.003583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.003606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.008623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.008654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.008671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.014060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.014092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.014109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.021155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.021188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.021206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.027914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.027949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.027969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.033507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.033538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.033556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.039074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.039105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.039123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.433 [2024-11-19 03:16:03.044716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.433 [2024-11-19 03:16:03.044747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.433 [2024-11-19 03:16:03.044768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.693 [2024-11-19 03:16:03.050334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.693 [2024-11-19 03:16:03.050366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.693 [2024-11-19 03:16:03.050383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.693 [2024-11-19 03:16:03.056092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.693 [2024-11-19 03:16:03.056124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.693 [2024-11-19 03:16:03.056141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.693 [2024-11-19 03:16:03.060711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.693 [2024-11-19 03:16:03.060744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.693 [2024-11-19 03:16:03.060762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.693 [2024-11-19 03:16:03.065409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.693 [2024-11-19 03:16:03.065441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.693 [2024-11-19 03:16:03.065458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.693 [2024-11-19 03:16:03.068842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.693 [2024-11-19 03:16:03.068875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.693 [2024-11-19 03:16:03.068893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.693 [2024-11-19 03:16:03.073083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.693 [2024-11-19 03:16:03.073115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.693 
[2024-11-19 03:16:03.073133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.693 [2024-11-19 03:16:03.079247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.693 [2024-11-19 03:16:03.079279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.693 [2024-11-19 03:16:03.079295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.086947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.087002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.087020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.092976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.093023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.093040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.099445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.099476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.099498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.104302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.104334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.104351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.108898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.108930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.108949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.113475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.113505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.113522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.118165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.118213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.118230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.122785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.122816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.122834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.694 5570.00 IOPS, 696.25 MiB/s [2024-11-19T02:16:03.309Z] [2024-11-19 03:16:03.129041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.129073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.129103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.133514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.133545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.133563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.138195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.138225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.138241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.142761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.142798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.142816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.147247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.147278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.147294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.152328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.152374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.152391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.157620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.157654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.157672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.163904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.163937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.163954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.169857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.169890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.169908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.175242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.175275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.175293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.181473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.181505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.181522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.187757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 
[2024-11-19 03:16:03.187791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.187809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.193817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.193850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.193868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.197605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.197637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.197654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.201521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.201553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.201570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.207155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.207186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.207202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.211857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.211889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.211907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.216428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.694 [2024-11-19 03:16:03.216460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.694 [2024-11-19 03:16:03.216476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.694 [2024-11-19 03:16:03.221148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.221196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.221213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.226542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.226589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.226606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.231575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.231604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.231626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.237268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.237316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.237333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.243096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.243129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.243145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.248440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.248470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.248487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.253542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.253573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.253590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.259277] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.259325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.259342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.265493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.265540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.265556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.270855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.270887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.270904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.276822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.276856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.276873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.282744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.282798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.282816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.288697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.288731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.288763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.295589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.295621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.295637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
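The repeated nvme_tcp.c:1365 records in this stretch are the host detecting NVMe/TCP data digest (DDGST) mismatches: the CRC32C recomputed over each received C2H DATA PDU payload (completed here through an accel-sequence callback, going by the function name in the log) does not match the digest carried in the PDU, so each affected READ is failed back to the caller instead of being accepted. As a rough point of reference only, the plain-C sketch below shows what such a digest check amounts to; crc32c(), ddgst_ok(), and the sample payload are illustrative names and data for this sketch, not SPDK's implementation (SPDK routes the CRC through its accel framework).

/* Minimal sketch of an NVMe/TCP data digest check (illustrative, not SPDK code).
 * The DDGST field is the CRC32C of the PDU DATA; a mismatch is what the
 * "data digest error" lines above report. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* ddgst: digest value carried in the PDU, already loaded into a host-order
 * integer (an assumption made for this sketch). */
static bool ddgst_ok(const void *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t good = crc32c(payload, sizeof(payload));
    printf("intact: %d, corrupted: %d\n",
           ddgst_ok(payload, sizeof(payload), good),
           ddgst_ok(payload, sizeof(payload), good ^ 1u));
    return 0;
}

A single corrupted bit anywhere in the payload (or in the digest itself) is enough for the comparison to fail, which is why every affected READ in the log is failed with a transport-level status rather than silently completed with bad data.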
00:34:52.695 [2024-11-19 03:16:03.301418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.301449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.301466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.695 [2024-11-19 03:16:03.306783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.695 [2024-11-19 03:16:03.306830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.695 [2024-11-19 03:16:03.306846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.311746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.311780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.311798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.317868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.317916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.317935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.324787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.324836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.324855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.332426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.332459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.332476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.339931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.339964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.339997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.348013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.348071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.348088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.355823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.355856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.355874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.362212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.362246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.362264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.366901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.366934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.366952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.371501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.371538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.371556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.376329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.376359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.376376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.381181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.381212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.381230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.387132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.387164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.387188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.394172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.394205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.394223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.401236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.401269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.401286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.406903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.406936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.406954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.955 [2024-11-19 03:16:03.412601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.955 [2024-11-19 03:16:03.412636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.955 [2024-11-19 03:16:03.412653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.417865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.417899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.417916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.424263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.424294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.424311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.431032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.431068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.431087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.435277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.435323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.435338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.442895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.442926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.442943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.450827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.450871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.450888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.458653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.458713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.458732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.465280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.465312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.465330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.471322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.471368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 
[2024-11-19 03:16:03.471386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.477235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.477268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.477286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.483566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.483614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.483631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.490518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.490551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.490568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.496312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.496345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.496367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.502171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.502203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.502221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.507984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.508032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.508049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.514239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.514271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.514302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.520489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.520522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.520556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.526295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.526330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.526350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.529590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.529623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.529640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.535029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.535074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.535091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.541236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.541283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.541300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.546929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.546969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.546987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.551920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.551953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.551985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.558306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.558338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.558354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.563711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.563757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.563773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.956 [2024-11-19 03:16:03.568817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:52.956 [2024-11-19 03:16:03.568849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.956 [2024-11-19 03:16:03.568867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.573987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.574036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.574053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.579951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.580003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.580020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.585711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.585744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.585762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.590840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.590872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.590890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.596654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.596711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.596745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.602243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.602276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.602295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.608041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.608074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.608092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.613474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.613507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.613524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.618276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.618309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.618326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.623579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.623611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.623628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.628880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 
[2024-11-19 03:16:03.628914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.628931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.634067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.634100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.634117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.638567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.638598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.638621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.643132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.643164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.643181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.647753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.647785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.647803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.652808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.652841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.652859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.657885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.657918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.657936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.662604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.662636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.662668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.667401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.667448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.667468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.672484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.672516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.672533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.675255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.217 [2024-11-19 03:16:03.675286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.217 [2024-11-19 03:16:03.675303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.217 [2024-11-19 03:16:03.679416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.679453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.679471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.683517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.683549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.683565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.687965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.688012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.688030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.692404] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.692435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.692451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.696932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.696963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.696998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.701465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.701496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.701512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.705930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.705961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.705978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.710474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.710506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.710523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.715430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.715463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.715481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.721489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.721522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.721539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:34:53.218 [2024-11-19 03:16:03.729017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.729050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.729081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.735108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.735155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.735172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.741494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.741527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.741560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.747884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.747918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.747935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.753669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.753725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.753749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.757312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.757344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.757361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.760726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.760772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.760789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.765963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.766008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.766030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.770587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.770634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.770650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.775082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.775111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.775128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.779676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.779726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.779744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.784233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.784267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.784284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.789062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.789093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.789110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.793539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.793569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.793585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.798589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.798620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.798636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.802670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.802725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.802744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.218 [2024-11-19 03:16:03.807646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.218 [2024-11-19 03:16:03.807703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.218 [2024-11-19 03:16:03.807737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.219 [2024-11-19 03:16:03.812569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.219 [2024-11-19 03:16:03.812600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.219 [2024-11-19 03:16:03.812632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.219 [2024-11-19 03:16:03.817284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.219 [2024-11-19 03:16:03.817315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.219 [2024-11-19 03:16:03.817332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.219 [2024-11-19 03:16:03.822284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.219 [2024-11-19 03:16:03.822334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.219 [2024-11-19 03:16:03.822354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.219 [2024-11-19 03:16:03.827565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.219 [2024-11-19 03:16:03.827595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.219 [2024-11-19 03:16:03.827612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.479 [2024-11-19 03:16:03.834961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.479 [2024-11-19 03:16:03.835004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.479 [2024-11-19 03:16:03.835022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.479 [2024-11-19 03:16:03.840345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.479 [2024-11-19 03:16:03.840379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.479 [2024-11-19 03:16:03.840396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.479 [2024-11-19 03:16:03.845518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.479 [2024-11-19 03:16:03.845552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.479 [2024-11-19 03:16:03.845585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.479 [2024-11-19 03:16:03.850814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.479 [2024-11-19 03:16:03.850847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.479 [2024-11-19 03:16:03.850865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.479 [2024-11-19 03:16:03.854974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.479 [2024-11-19 03:16:03.855015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.479 [2024-11-19 03:16:03.855038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.479 [2024-11-19 03:16:03.859698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.479 [2024-11-19 03:16:03.859731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.479 [2024-11-19 03:16:03.859748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.864999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.865032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 
[2024-11-19 03:16:03.865051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.870421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.870453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.870471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.876244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.876278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.876296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.881498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.881531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.881549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.887275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.887309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.887341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.893269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.893303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.893321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.899428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.899462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.899488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.905566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.905600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.905618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.911310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.911345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.911363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.917223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.917257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.917275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.923223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.923258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.923275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.928477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.928509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.928527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.932962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.932995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.933012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.937985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.938018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.938036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.942789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.942824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.942842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.948109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.948141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.948159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.952734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.952766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.952783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.956164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.956195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.956212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.961171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.961202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.961219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.966304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.966336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.966368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.971125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.971157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.971174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.976458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.976489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.976506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.981706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.981739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.981756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.986368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.986400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.986423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.991939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.991973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.992005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:03.997861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:03.997893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-19 03:16:03.997911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-19 03:16:04.004035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.480 [2024-11-19 03:16:04.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.004084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.010412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.010444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.010461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.015633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 
[2024-11-19 03:16:04.015665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.015709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.021715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.021763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.021782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.027829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.027863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.027881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.033909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.033943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.033961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.039938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.039979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.040011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.045486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.045518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.045536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.051277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.051310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.051327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-19 03:16:04.057444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6cb920) 00:34:53.481 [2024-11-19 03:16:04.057477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-19 03:16:04.057495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.741 [2024-11-19 03:16:04.098824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.741 [2024-11-19 03:16:04.098876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.741 [2024-11-19 03:16:04.098896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.741 [2024-11-19 03:16:04.104333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.741 [2024-11-19 03:16:04.104366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.741 [2024-11-19 03:16:04.104383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.741 [2024-11-19 03:16:04.109514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.741 [2024-11-19 03:16:04.109545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.741 [2024-11-19 03:16:04.109562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.741 [2024-11-19 03:16:04.114195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.741 [2024-11-19 03:16:04.114226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.741 [2024-11-19 03:16:04.114243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.741 [2024-11-19 03:16:04.119730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.741 [2024-11-19 03:16:04.119769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.741 [2024-11-19 03:16:04.119793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.741 [2024-11-19 03:16:04.127015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cb920) 00:34:53.741 [2024-11-19 03:16:04.127048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.741 [2024-11-19 03:16:04.127066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.741 5520.50 IOPS, 690.06 MiB/s 00:34:53.741 Latency(us) 00:34:53.741 
[2024-11-19T02:16:04.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.741 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:53.741 nvme0n1 : 2.00 5520.73 690.09 0.00 0.00 2894.11 731.21 41166.32 00:34:53.741 [2024-11-19T02:16:04.356Z] =================================================================================================================== 00:34:53.741 [2024-11-19T02:16:04.356Z] Total : 5520.73 690.09 0.00 0.00 2894.11 731.21 41166.32 00:34:53.741 { 00:34:53.741 "results": [ 00:34:53.741 { 00:34:53.741 "job": "nvme0n1", 00:34:53.741 "core_mask": "0x2", 00:34:53.741 "workload": "randread", 00:34:53.741 "status": "finished", 00:34:53.741 "queue_depth": 16, 00:34:53.741 "io_size": 131072, 00:34:53.741 "runtime": 2.002816, 00:34:53.741 "iops": 5520.726816642168, 00:34:53.741 "mibps": 690.090852080271, 00:34:53.741 "io_failed": 0, 00:34:53.741 "io_timeout": 0, 00:34:53.741 "avg_latency_us": 2894.10672428058, 00:34:53.741 "min_latency_us": 731.2118518518519, 00:34:53.741 "max_latency_us": 41166.317037037035 00:34:53.741 } 00:34:53.741 ], 00:34:53.741 "core_count": 1 00:34:53.741 } 00:34:53.741 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:53.741 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:53.741 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:53.741 | .driver_specific 00:34:53.741 | .nvme_error 00:34:53.741 | .status_code 00:34:53.741 | .command_transient_transport_error' 00:34:53.741 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 357 > 0 )) 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396562 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396562 ']' 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396562 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396562 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396562' 00:34:54.000 killing process with pid 396562 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396562 00:34:54.000 Received shutdown signal, test time was about 2.000000 seconds 00:34:54.000 00:34:54.000 Latency(us) 00:34:54.000 [2024-11-19T02:16:04.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:34:54.000 [2024-11-19T02:16:04.615Z] =================================================================================================================== 00:34:54.000 [2024-11-19T02:16:04.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:54.000 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396562 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=397086 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 397086 /var/tmp/bperf.sock 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 397086 ']' 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:54.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.259 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.259 [2024-11-19 03:16:04.697252] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
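The helper calls traced just above read the NVMe error counters back over the bperf RPC socket and require that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted (357 in this run). A minimal stand-alone sketch of the same check, assuming an SPDK checkout at ./spdk, a bdevperf instance listening on /var/tmp/bperf.sock, and --nvme-error-stat having been enabled beforehand (the script's bperf_rpc / get_transient_errcount helpers wrap exactly this query):

    # pull the transient transport error counter for nvme0n1 out of bdev_get_iostat
    errcount=$(./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # the digest-error test only passes if the injected CRC errors were reported this way
    (( errcount > 0 )) || echo "no transient transport errors recorded" >&2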
00:34:54.259 [2024-11-19 03:16:04.697349] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397086 ] 00:34:54.259 [2024-11-19 03:16:04.764566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.259 [2024-11-19 03:16:04.809825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.518 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.518 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:54.518 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:54.518 03:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:54.778 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:54.778 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.778 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.778 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.778 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.778 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.036 nvme0n1 00:34:55.036 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:55.036 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.036 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.036 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.036 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:55.036 03:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:55.295 Running I/O for 2 seconds... 
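Condensed, the setup traced above for this randwrite pass is: start bdevperf idle, enable per-bdev NVMe error statistics with unlimited bdev retries, attach the target subsystem over TCP with data digest (DDGST) enabled, arm crc32c error injection in the target's accel layer, and then start the workload. A sketch of the same sequence driven directly with rpc.py, assuming an SPDK checkout at ./spdk and the target application on the default RPC socket /var/tmp/spdk.sock (the test itself wraps these calls in its bperf_rpc / rpc_cmd / bperf_py helpers):

    BPERF_SOCK=/var/tmp/bperf.sock
    # bdevperf on core mask 0x2, 4 KiB randwrite, queue depth 128, 2 s run; -z waits for perform_tests
    ./spdk/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done    # crude stand-in for the script's waitforlisten

    # count NVMe status codes per bdev and never give up on retriable errors
    ./spdk/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the subsystem with TCP data digest enabled so corrupted CRCs are detected on receive
    ./spdk/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # arm crc32c corruption in the target's accel framework (same arguments as the traced rpc_cmd)
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

    # kick off the 2-second I/O run; corrupted digests then surface as transient transport errors
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests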
00:34:55.295 [2024-11-19 03:16:05.710061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166f0bc0 00:34:55.295 [2024-11-19 03:16:05.711282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.711325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:55.295 [2024-11-19 03:16:05.722793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166f9f68 00:34:55.295 [2024-11-19 03:16:05.724201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.724249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:55.295 [2024-11-19 03:16:05.734793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166ed0b0 00:34:55.295 [2024-11-19 03:16:05.735852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.735883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:55.295 [2024-11-19 03:16:05.746374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166dfdc0 00:34:55.295 [2024-11-19 03:16:05.747600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.747631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:55.295 [2024-11-19 03:16:05.758319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166f5378 00:34:55.295 [2024-11-19 03:16:05.759439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.759484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:55.295 [2024-11-19 03:16:05.769542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166f9b30 00:34:55.295 [2024-11-19 03:16:05.770533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.770578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:55.295 [2024-11-19 03:16:05.780656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166de038 00:34:55.295 [2024-11-19 03:16:05.781467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.781512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:34:55.295 [2024-11-19 03:16:05.795826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166fe2e8 00:34:55.295 [2024-11-19 03:16:05.797593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.295 [2024-11-19 03:16:05.797636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.804205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e1b48 00:34:55.296 [2024-11-19 03:16:05.805045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.805089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.815442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166fcdd0 00:34:55.296 [2024-11-19 03:16:05.816235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.816280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.827399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166eaab8 00:34:55.296 [2024-11-19 03:16:05.828256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.828300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.841982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e95a0 00:34:55.296 [2024-11-19 03:16:05.843329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.843375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.853617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e9168 00:34:55.296 [2024-11-19 03:16:05.855200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.855245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.864417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166feb58 00:34:55.296 [2024-11-19 03:16:05.865670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.865707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.874815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166f6020 00:34:55.296 [2024-11-19 03:16:05.875793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.875837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.889297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.296 [2024-11-19 03:16:05.889595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.889644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.296 [2024-11-19 03:16:05.903528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.296 [2024-11-19 03:16:05.903780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.296 [2024-11-19 03:16:05.903822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:05.917449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:05.917747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.555 [2024-11-19 03:16:05.917790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:05.930896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:05.931150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.555 [2024-11-19 03:16:05.931195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:05.944941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:05.945176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.555 [2024-11-19 03:16:05.945219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:05.959130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:05.959381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.555 [2024-11-19 03:16:05.959425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:05.973239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:05.973494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.555 [2024-11-19 03:16:05.973535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:05.987383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:05.987603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.555 [2024-11-19 03:16:05.987648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:06.001736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:06.001983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.555 [2024-11-19 03:16:06.002026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.555 [2024-11-19 03:16:06.015985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.555 [2024-11-19 03:16:06.016269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.016315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.030225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.030496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.030541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.044526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.044803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.044847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.058612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.058897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.058941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.072852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.073124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.073151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.086972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.087205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.087247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.101433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.101710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.101754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.115535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.115801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.115830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.129924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.130201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.130246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.143801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.144117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.144147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.157661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.157874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.157918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.556 [2024-11-19 03:16:06.171417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.556 [2024-11-19 03:16:06.171641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.556 [2024-11-19 03:16:06.171671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.815 [2024-11-19 03:16:06.184866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.815 [2024-11-19 03:16:06.185111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.815 [2024-11-19 03:16:06.185156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.815 [2024-11-19 03:16:06.198750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.815 [2024-11-19 03:16:06.198961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.815 [2024-11-19 03:16:06.199016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.815 [2024-11-19 03:16:06.212309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.815 [2024-11-19 03:16:06.212553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.815 [2024-11-19 03:16:06.212598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.815 [2024-11-19 03:16:06.225918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.815 [2024-11-19 03:16:06.226154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.815 [2024-11-19 03:16:06.226197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.815 [2024-11-19 03:16:06.239632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.239957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.239988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.253374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.253611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.253655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.267165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.267449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.267494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.280724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.280938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.280965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.294415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.294712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.294743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.308123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.308379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.308425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.321686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.321919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.321949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.335300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.335532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.335577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.349040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.349233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.349259] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.362794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.363060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.363089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.376493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.376780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.376816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.390292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.390483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.390511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.403844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.404089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.404133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.417478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.417715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.417769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:55.816 [2024-11-19 03:16:06.431190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:55.816 [2024-11-19 03:16:06.431436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.816 [2024-11-19 03:16:06.431466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.075 [2024-11-19 03:16:06.444734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.075 [2024-11-19 03:16:06.444978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 
03:16:06.445022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.458561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.458812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.458839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.472472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.472775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.472805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.486376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.486655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.486705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.500172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.500470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.500513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.513943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.514157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.514200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.527811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.528122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.528166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.541723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.541975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.076 [2024-11-19 03:16:06.542019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.555611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.555852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.555883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.569442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.569779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.569808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.583223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.583540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.583569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.597034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.597302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.597346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.610835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.611058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.611099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.624755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.624982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.625013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.638433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.638737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3610 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:56.076 [2024-11-19 03:16:06.638778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.652245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.652464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.652490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.666066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.666289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.666333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.679730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.076 [2024-11-19 03:16:06.679941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.076 [2024-11-19 03:16:06.679985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.076 [2024-11-19 03:16:06.693544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.694256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.694288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 18821.00 IOPS, 73.52 MiB/s [2024-11-19T02:16:06.951Z] [2024-11-19 03:16:06.707079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.707307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.707350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.720684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.721026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.721054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.734433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.734737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.734775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.748220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.748465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.748508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.762019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.762302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.762349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.775831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.776082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.776127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.789532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.789778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.789822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.803417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.803656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.803709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.817250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.817529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.817572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.830594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.830897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.830926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.844397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.844669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.844722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.858214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.858459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.858502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.872061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.872359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.872403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.885719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.885968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.886011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.899451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.899731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.899760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.913220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.913437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.913480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.927092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.927379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.927406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.336 [2024-11-19 03:16:06.940839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.336 [2024-11-19 03:16:06.941088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.336 [2024-11-19 03:16:06.941134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:06.954346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:06.954675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:06.954728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:06.968065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:06.968350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:06.968395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:06.981832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:06.982051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:06.982078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:06.995697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:06.995947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:06.995977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.009505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:07.009797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.009828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.023114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 
03:16:07.023394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.023439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.036936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:07.037177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.037223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.050793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:07.051011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.051056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.064554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:07.064812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.064842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.078309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:07.078600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.078644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.092069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:07.092345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.092394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.106302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.596 [2024-11-19 03:16:07.106600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.106647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.120317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 
00:34:56.596 [2024-11-19 03:16:07.120537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.596 [2024-11-19 03:16:07.120567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.596 [2024-11-19 03:16:07.134148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.597 [2024-11-19 03:16:07.134386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.597 [2024-11-19 03:16:07.134415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.597 [2024-11-19 03:16:07.147851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.597 [2024-11-19 03:16:07.148146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.597 [2024-11-19 03:16:07.148192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.597 [2024-11-19 03:16:07.161557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.597 [2024-11-19 03:16:07.161842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.597 [2024-11-19 03:16:07.161882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.597 [2024-11-19 03:16:07.175268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.597 [2024-11-19 03:16:07.175489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.597 [2024-11-19 03:16:07.175532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.597 [2024-11-19 03:16:07.189115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.597 [2024-11-19 03:16:07.189335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.597 [2024-11-19 03:16:07.189361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.597 [2024-11-19 03:16:07.202890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.597 [2024-11-19 03:16:07.203182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.597 [2024-11-19 03:16:07.203226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.856 [2024-11-19 03:16:07.216410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with 
pdu=0x2000166e99d8 00:34:56.856 [2024-11-19 03:16:07.216674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.856 [2024-11-19 03:16:07.216713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.856 [2024-11-19 03:16:07.230169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.230432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.230461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.243869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.244103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.244131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.257740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.257955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.257998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.271640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.271884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.271913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.285393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.285617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.285662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.299123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.299385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.299430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.312873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.313118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.313162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.326492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.326766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.326796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.340176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.340404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.340433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.353928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.354145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.354187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.367731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.367962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.367990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.381365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.381579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.381607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.394932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.395190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.395235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.408475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.408727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.408757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.421995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.422202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.422229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.435746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.435981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.436011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.449250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.449534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.449585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.857 [2024-11-19 03:16:07.462940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:56.857 [2024-11-19 03:16:07.463181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.857 [2024-11-19 03:16:07.463211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.476484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.476745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.476776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.489932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.490198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.490228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.503428] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.503724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.503755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.516931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.517239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.517282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.530349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.530583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.530610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.544038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.544272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.544299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.557590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.557844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.557872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.571673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.116 [2024-11-19 03:16:07.571950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-19 03:16:07.571987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.116 [2024-11-19 03:16:07.586095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.586324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.586367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 [2024-11-19 03:16:07.600338] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.600635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.600663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 [2024-11-19 03:16:07.614540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.614788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.614818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 [2024-11-19 03:16:07.628667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.628963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.628994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 [2024-11-19 03:16:07.643065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.643314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.643358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 [2024-11-19 03:16:07.657041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.657305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.657348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 [2024-11-19 03:16:07.671308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.671608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.671637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 [2024-11-19 03:16:07.685577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.685833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.685877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 18686.00 
IOPS, 72.99 MiB/s [2024-11-19T02:16:07.732Z] [2024-11-19 03:16:07.699659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec460) with pdu=0x2000166e99d8 00:34:57.117 [2024-11-19 03:16:07.699909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.117 [2024-11-19 03:16:07.699951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.117 00:34:57.117 Latency(us) 00:34:57.117 [2024-11-19T02:16:07.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.117 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.117 nvme0n1 : 2.01 18683.24 72.98 0.00 0.00 6835.29 2669.99 15922.82 00:34:57.117 [2024-11-19T02:16:07.732Z] =================================================================================================================== 00:34:57.117 [2024-11-19T02:16:07.732Z] Total : 18683.24 72.98 0.00 0.00 6835.29 2669.99 15922.82 00:34:57.117 { 00:34:57.117 "results": [ 00:34:57.117 { 00:34:57.117 "job": "nvme0n1", 00:34:57.117 "core_mask": "0x2", 00:34:57.117 "workload": "randwrite", 00:34:57.117 "status": "finished", 00:34:57.117 "queue_depth": 128, 00:34:57.117 "io_size": 4096, 00:34:57.117 "runtime": 2.006718, 00:34:57.117 "iops": 18683.242986807316, 00:34:57.117 "mibps": 72.98141791721608, 00:34:57.117 "io_failed": 0, 00:34:57.117 "io_timeout": 0, 00:34:57.117 "avg_latency_us": 6835.293853819679, 00:34:57.117 "min_latency_us": 2669.9851851851854, 00:34:57.117 "max_latency_us": 15922.82074074074 00:34:57.117 } 00:34:57.117 ], 00:34:57.117 "core_count": 1 00:34:57.117 } 00:34:57.117 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:57.117 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:57.117 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:57.117 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:57.117 | .driver_specific 00:34:57.117 | .nvme_error 00:34:57.117 | .status_code 00:34:57.117 | .command_transient_transport_error' 00:34:57.375 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 )) 00:34:57.375 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 397086 00:34:57.375 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 397086 ']' 00:34:57.375 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 397086 00:34:57.634 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:57.634 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:57.634 03:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397086 00:34:57.634 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:57.634 03:16:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:57.634 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397086' 00:34:57.634 killing process with pid 397086 00:34:57.634 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 397086 00:34:57.634 Received shutdown signal, test time was about 2.000000 seconds 00:34:57.634 00:34:57.634 Latency(us) 00:34:57.634 [2024-11-19T02:16:08.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.634 [2024-11-19T02:16:08.249Z] =================================================================================================================== 00:34:57.634 [2024-11-19T02:16:08.249Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.634 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 397086 00:34:57.634 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=397486 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 397486 /var/tmp/bperf.sock 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 397486 ']' 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.635 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:57.894 [2024-11-19 03:16:08.274722] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:34:57.894 [2024-11-19 03:16:08.274820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397486 ] 00:34:57.894 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.894 Zero copy mechanism will not be used. 
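The pass/fail check traced above reduces to one RPC and one jq filter: read the bdev's I/O statistics from the bdevperf instance and count completions that carried COMMAND TRANSIENT TRANSPORT ERROR status. A minimal standalone sketch of that check, reusing the rpc.py invocation and jq path from this trace and assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check from host/digest.sh as traced above.
# Assumes an SPDK bdevperf instance is listening on /var/tmp/bperf.sock and that
# bdev_nvme_set_options --nvme-error-stat was applied, so per-status-code NVMe
# error counters are populated for nvme0n1.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

errcount=$("$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

# The digest-error test passes only if at least one such completion was observed
# (147 in the 4 KiB randwrite run above).
(( errcount > 0 )) && echo "observed $errcount transient transport errors"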
00:34:57.894 [2024-11-19 03:16:08.339612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.894 [2024-11-19 03:16:08.382250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.894 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.894 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:57.894 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.894 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.152 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:58.152 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.152 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.411 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.411 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.411 03:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.676 nvme0n1 00:34:58.676 03:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:58.676 03:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.676 03:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.676 03:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.676 03:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:58.676 03:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.940 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:58.940 Zero copy mechanism will not be used. 00:34:58.940 Running I/O for 2 seconds... 
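Strung together, the setup steps traced above form a short sequence: configure NVMe error accounting on the bdevperf side, clear any previous accel error injection, attach the TCP controller with data digest enabled, arm crc32c corruption, and then drive the queued workload. The sketch below restates that sequence with the same RPCs and arguments shown in this trace; treating accel_error_inject_error as going to the target application's default RPC socket (rpc_cmd above shows no explicit -s) is an inference, not something the log states.

#!/usr/bin/env bash
# Sketch of the digest-error setup traced above (host/digest.sh lines 61-69).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bperf.sock

# bdevperf side: keep per-status-code NVMe error statistics and set the bdev
# retry count exactly as in the trace, so failed completions are counted.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side (default RPC socket assumed): clear any earlier crc32c injection.
"$RPC" accel_error_inject_error -o crc32c -t disable

# Attach the controller over TCP with data digest (--ddgst) enabled; this
# creates the nvme0n1 bdev used by the workload.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results so data-digest verification fails during the run;
# the -o/-t/-i arguments are reproduced from the trace above.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the randwrite workload bdevperf was started with (-z waits for this).
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests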
00:34:58.940 [2024-11-19 03:16:09.387735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.387860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.387901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.393771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.393868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.393900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.399147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.399246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.399276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.404630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.404725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.404753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.409752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.409839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.409866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.414963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.415048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.415075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.420162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.420252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.420288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.425460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.425547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.425575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.430868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.430955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.430982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.435902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.435994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.436032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.441518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.441601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.441631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.447189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.447272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.447299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.452284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.452372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.452400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.457372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.457454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.457482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.462470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.462574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.462603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.467625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.467730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.467768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.473760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.473850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.940 [2024-11-19 03:16:09.473877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.940 [2024-11-19 03:16:09.480340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.940 [2024-11-19 03:16:09.480502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.480532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.486858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.487033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.487063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.493110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.493297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.493327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.498799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.498921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.498951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.503795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.503900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.503929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.509316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.509464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.509494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.515194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.515308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.515336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.520564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.520654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.520682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.525629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.525730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.525761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.530793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.530889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.530916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.536443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.536611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.536640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.542831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.542986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.543015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.548416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.548556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.548586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.941 [2024-11-19 03:16:09.553603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:58.941 [2024-11-19 03:16:09.553719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.941 [2024-11-19 03:16:09.553750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.558605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.558705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.558734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.563871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.563975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.564011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.569090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.569211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.569241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.574260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.574363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.574393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.579940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.580056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.580097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.585121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.585242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.585271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.590850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.590951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.590979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.597180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.597338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.597368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.603476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.603626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.603656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.609863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.610050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.610080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.616101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.616242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 
03:16:09.616279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.621076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.621171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.621200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.626473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.626620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.626649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.631708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.631841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.631871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.636599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.636698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.636737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.641704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.641814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.641841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.647058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.647196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.647239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.652569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.652725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:59.201 [2024-11-19 03:16:09.652753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.658154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.658284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.658311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.663512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.663613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.663656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.668577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.668665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.668703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.673895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.674022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.674050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.679909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.680080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.680108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.686255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.686329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.686358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.693357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.693481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.693509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.699900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.700055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.700083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.706461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.706648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.201 [2024-11-19 03:16:09.706699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.201 [2024-11-19 03:16:09.712257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.201 [2024-11-19 03:16:09.712378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.712406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.717332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.717456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.717499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.722702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.722805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.722833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.728449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.728522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.728550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.734932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.735050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.735078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.742179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.742312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.742340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.747781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.747885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.747914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.752785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.752923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.752951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.757827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.757916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.757945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.762870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.762951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.762984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.768203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.768360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.768388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.773400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.773540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.773568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.778960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.779090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.779117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.784583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.784660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.784696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.789328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.789410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.789438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.794221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.794292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.794320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.798947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.799035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.799063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.803670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.803761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.803788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.808308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.808404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.808432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.813170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.813265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.813292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.202 [2024-11-19 03:16:09.817923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.202 [2024-11-19 03:16:09.818002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.202 [2024-11-19 03:16:09.818031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.462 [2024-11-19 03:16:09.822752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.462 [2024-11-19 03:16:09.822833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.462 [2024-11-19 03:16:09.822862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.462 [2024-11-19 03:16:09.827560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.462 [2024-11-19 03:16:09.827638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.462 [2024-11-19 03:16:09.827666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.462 [2024-11-19 03:16:09.832336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.462 [2024-11-19 03:16:09.832424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.462 [2024-11-19 03:16:09.832452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.462 [2024-11-19 03:16:09.837009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.462 [2024-11-19 03:16:09.837108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.462 [2024-11-19 03:16:09.837136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.462 [2024-11-19 03:16:09.841925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.462 [2024-11-19 03:16:09.842073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.462 [2024-11-19 03:16:09.842102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.462 [2024-11-19 03:16:09.847807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.462 [2024-11-19 03:16:09.847987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.462 [2024-11-19 03:16:09.848027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.462 [2024-11-19 03:16:09.853531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.462 [2024-11-19 03:16:09.853621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.462 [2024-11-19 03:16:09.853648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.859671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.859791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.859819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.864392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.864495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.864523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.869108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.869191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.869220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.873816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.873895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.873922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.878599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 
03:16:09.878684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.878718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.883605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.883701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.883729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.888362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.888439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.888466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.893433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.893505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.893538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.898546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.898622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.898649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.903960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.904032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.904059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.908640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.908763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.908791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.913520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 
00:34:59.463 [2024-11-19 03:16:09.913604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.913632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.918361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.918459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.918487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.923400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.923480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.923507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.928018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.928096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.928123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.933247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.933355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.933383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.939377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.939558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.939586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.945519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.945678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.945717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.952073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with 
pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.952179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.952207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.956935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.957013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.957040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.961814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.961916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.961946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.966894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.967036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.967066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.972101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.972262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.972292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.977325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.977481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.977510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.982518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.982642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.982672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.987740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.987851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.987879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.992966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.993092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.993120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:09.998198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:09.998318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:09.998346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:10.003380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:10.003524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:10.003555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:10.008650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:10.008795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.463 [2024-11-19 03:16:10.008824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.463 [2024-11-19 03:16:10.013900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.463 [2024-11-19 03:16:10.014031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.014059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.019098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.019206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.019245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.024214] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.024311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.024340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.029309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.029441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.029479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.034369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.034486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.034515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.039649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.039771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.039799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.044575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.044742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.044770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.049310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.049412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.049440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.054398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.054581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.054611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.060391] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.060587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.060618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.065639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.065791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.065819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.071623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.071774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.071813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.464 [2024-11-19 03:16:10.077482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.464 [2024-11-19 03:16:10.077577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.464 [2024-11-19 03:16:10.077606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.083827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.083978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.084007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.090721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.090940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.090970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.097408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.097609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.097637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 
03:16:10.103367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.103537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.103565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.108548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.108742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.108770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.114672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.114873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.114901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.120660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.120830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.120859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.127142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.127223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.127251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.133308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.133521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.133549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.139399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.139573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.139601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
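The repeated failures above all follow one pattern: data_crc32_calc_done() in tcp.c reports a data digest mismatch on tqpair 0xbec7a0, the offending WRITE is printed, and its completion carries COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. Per the NVMe/TCP transport binding, the data digest (DDGST) being checked here is a CRC32C computed over the DATA field of the PDU. The following is a minimal standalone sketch of that check only, not SPDK's implementation; the payload bytes and the single flipped bit are invented for illustration.

/* Minimal sketch of the check behind the "Data digest error" lines above.
 * NOT SPDK's code: the NVMe/TCP data digest (DDGST) is a CRC32C over the
 * PDU DATA field; the payload contents and the injected bit flip below are
 * made up purely to show how a mismatch is detected. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli polynomial, reflected, init/final XOR 0xFFFFFFFF). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t data[32];
    memset(data, 0xA5, sizeof(data));   /* toy payload standing in for the WRITE data */

    /* Digest the sender attaches to the data PDU. */
    uint32_t ddgst_in_pdu = crc32c(data, sizeof(data));

    /* Simulated corruption of the payload in flight. */
    data[7] ^= 0x01;

    /* Digest the receiver recomputes over what actually arrived. */
    uint32_t ddgst_recalc = crc32c(data, sizeof(data));

    printf("DDGST in PDU 0x%08x, recomputed 0x%08x -> %s\n",
           (unsigned)ddgst_in_pdu, (unsigned)ddgst_recalc,
           ddgst_in_pdu == ddgst_recalc ? "ok" : "data digest error");
    return 0;
}

Because CRC32C changes with any single-bit difference, every corrupted WRITE in this run is flagged the same way and completed with the retryable (00/22) status seen throughout the log.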
00:34:59.724 [2024-11-19 03:16:10.145496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.145698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.145727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.151704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.151925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.151953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.157789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.157940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.157968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.163911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.164057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.164085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.170712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.170879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.170907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.176146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.176295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.176322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.181016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.181150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.181183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.186079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.186208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.186250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.191732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.191813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.724 [2024-11-19 03:16:10.191840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.724 [2024-11-19 03:16:10.197124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.724 [2024-11-19 03:16:10.197194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.197222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.202353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.202448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.202475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.207213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.207289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.207317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.212511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.212588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.212616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.217363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.217437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.217465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.222002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.222087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.222114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.226644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.226784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.226814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.231991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.232182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.232211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.238053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.238243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.238272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.244126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.244275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.244305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.250772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.250918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.250949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.257045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.257191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.257221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.263915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.263991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.264021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.269956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.270073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.270101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.274656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.274751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.274780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.279334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.279418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.279446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.284006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.284082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.284110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.288646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.288749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.288777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.293347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.293431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.293461] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.298019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.298120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.298163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.302696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.302792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.302820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.307331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.307488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.307516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.312350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.312511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.312541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.318333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.318525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.318561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.323609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.323733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.323761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.329698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.329798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.329826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.335092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.725 [2024-11-19 03:16:10.335244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.725 [2024-11-19 03:16:10.335275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.725 [2024-11-19 03:16:10.340172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.726 [2024-11-19 03:16:10.340290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.726 [2024-11-19 03:16:10.340319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.344944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.345085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.345113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.349580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.349713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.349750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.354412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.354553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.354596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.359258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.359410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.359438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.364280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.364389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 
03:16:10.364416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.370592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.370719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.370747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.375446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.375519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.375548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.380119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.380216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.380245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.985 5703.00 IOPS, 712.88 MiB/s [2024-11-19T02:16:10.600Z] [2024-11-19 03:16:10.386566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.386653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.386682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.391481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.391576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.391604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.396726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.396835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.396864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.401920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.402009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.402037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.406924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.407023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.407051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.411901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.411983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.412012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.417481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.417553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.417581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.423155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.423234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.985 [2024-11-19 03:16:10.423262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.985 [2024-11-19 03:16:10.428206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.985 [2024-11-19 03:16:10.428295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.428324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.433135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.433209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.433236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.438024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.438103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.438131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.443008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.443103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.443131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.448137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.448229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.448257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.453156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.453239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.453284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.458335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.458428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.458457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.464187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.464257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.464285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.469259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.469337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.469365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.474342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.474421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.474449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.479574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.479710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.479752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.485015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.485096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.485125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.490128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.490204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.490232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.495241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.495313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.495342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.500267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.500356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.500384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.505474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.505552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.505580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.510550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 
03:16:10.510619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.510647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.515682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.515770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.515799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.520707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.520779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.520808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.525750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.525828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.525856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.530913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.531035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.531063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.536508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.536695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.536724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.542891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.543107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.543138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.550211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 
00:34:59.986 [2024-11-19 03:16:10.550387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.550415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.557766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.557903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.557930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.564956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.565094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.565124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.571311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.571504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.986 [2024-11-19 03:16:10.571533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:59.986 [2024-11-19 03:16:10.577655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.986 [2024-11-19 03:16:10.577841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.987 [2024-11-19 03:16:10.577870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.987 [2024-11-19 03:16:10.584051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.987 [2024-11-19 03:16:10.584160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.987 [2024-11-19 03:16:10.584188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:59.987 [2024-11-19 03:16:10.590456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:34:59.987 [2024-11-19 03:16:10.590641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.987 [2024-11-19 03:16:10.590670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:59.987 [2024-11-19 03:16:10.596727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with 
pdu=0x2000166fef90 00:34:59.987 [2024-11-19 03:16:10.596915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.987 [2024-11-19 03:16:10.596945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.603093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.603268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.603303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.609730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.609901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.609930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.617005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.617187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.617216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.623927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.624100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.624128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.631447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.631618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.631646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.638334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.638407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.638435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.643947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.644027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.644054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.649004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.649105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.649134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.654025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.654100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.654128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.658948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.659033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.659067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.664018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.664130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.664158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.668944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.669025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.247 [2024-11-19 03:16:10.669053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.247 [2024-11-19 03:16:10.673994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.247 [2024-11-19 03:16:10.674066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.674094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.679021] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.679103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.679131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.684042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.684128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.684156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.688995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.689073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.689101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.693972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.694066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.694095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.699082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.699190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.699218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.704079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.704174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.704202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.709069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.709165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.709193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.714352] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.714431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.714459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.719988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.720089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.720117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.724942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.725025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.725052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.730039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.730128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.730157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.735001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.735084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.735112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.739968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.740055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.740083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.745096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.745219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.745247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.248 
[2024-11-19 03:16:10.751290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.751478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.751506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.756710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.756790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.756819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.761560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.761631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.761660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.766546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.766631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.766660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.771631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.771719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.771748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.776687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.776767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.776795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.781686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.781777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.781806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.786648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.786727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.786756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.791685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.791789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.791824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.796678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.796857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.796884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.802542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.802660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.802695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.807306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.807377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.807405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.812218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.248 [2024-11-19 03:16:10.812298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.248 [2024-11-19 03:16:10.812326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.248 [2024-11-19 03:16:10.817111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.817190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.817218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.249 [2024-11-19 03:16:10.822017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.822098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.822127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.249 [2024-11-19 03:16:10.827191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.827324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.827367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.249 [2024-11-19 03:16:10.832736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.832917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.832945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.249 [2024-11-19 03:16:10.838922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.839129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.839158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.249 [2024-11-19 03:16:10.845273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.845481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.845510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.249 [2024-11-19 03:16:10.851921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.851997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.852027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.249 [2024-11-19 03:16:10.859248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.249 [2024-11-19 03:16:10.859460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.249 [2024-11-19 03:16:10.859490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.865717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.865874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.865905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.870844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.870983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.871013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.875174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.875282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.875312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.879499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.879607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.879635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.883818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.883948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.883977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.888295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.888447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.888476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.893648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.893751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.893780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.898826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.898956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.898984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.903604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.903727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.903756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.908239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.908333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.908361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.912797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.912910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.510 [2024-11-19 03:16:10.912939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.510 [2024-11-19 03:16:10.917069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.510 [2024-11-19 03:16:10.917157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.917186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.921698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.921794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.921823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.926304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.926414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.926449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.930782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.930885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.930913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.935553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.935654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.935682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.940112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.940217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.940245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.944712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.944803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.944831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.949205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.949310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.949339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.953747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.953842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.953871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.958531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.958676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 
03:16:10.958713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.964229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.964382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.964411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.969672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.969831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.969859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.975063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.975259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.975287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.981298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.981534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.981565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.987461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.987623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.987651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.993581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:10.993737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:10.993766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:10.999868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.000004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:00.511 [2024-11-19 03:16:11.000033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.006080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.006246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.006276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.011305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.011463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.011491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.016623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.016806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.016835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.021947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.022115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.022144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.027216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.027450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.027481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.032675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.032916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.032948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.038914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.039130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.039160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.044168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.044374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.044404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.049588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.049796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.049827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.054888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.055032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.511 [2024-11-19 03:16:11.055062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.511 [2024-11-19 03:16:11.060134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.511 [2024-11-19 03:16:11.060353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.060384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.065841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.065947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.065982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.071785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.072019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.072050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.077734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.077828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.077857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.082972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.083062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.083091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.087625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.087735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.087764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.092233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.092328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.092356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.097639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.097827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.097858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.103194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.103423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.103453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.109260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.109405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.109435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.114659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.114861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.114891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.119958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.120091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.120121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.512 [2024-11-19 03:16:11.125389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.512 [2024-11-19 03:16:11.125522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.512 [2024-11-19 03:16:11.125554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.130701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.130878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.130909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.135983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.136214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.136245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.141180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.141416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.141449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.146536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.146761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.146792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.152051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.152216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.152246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.157408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.157574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.157603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.162793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.163018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.163050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.168109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.168351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.168382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.173442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.173681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.173742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.178633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.178774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.178802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.773 [2024-11-19 03:16:11.183905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.773 [2024-11-19 03:16:11.184102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.773 [2024-11-19 03:16:11.184133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.188874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.189001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.189029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.193807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.193984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.194012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.199554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.199710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.199739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.205263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.205450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.205489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.210006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.210142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.210170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.214193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.214344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.214388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.218827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.218965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.218993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.223353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 
03:16:11.223489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.223518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.228739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.228840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.228868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.233494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.233610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.233638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.238218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.238333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.238361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.242923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.243065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.243109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.248478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.248670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.248709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.254425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.254657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.254695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.260542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 
00:35:00.774 [2024-11-19 03:16:11.260728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.260756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.265836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.265945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.265974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.271050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.271161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.271189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.275506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.275641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.275669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.279968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.280129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.280157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.284250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.284368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.284396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.288578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.288723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.288751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.292955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) 
with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.293063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.293091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.297893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.298017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.298045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.302632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.302732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.302760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.307218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.307288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.307316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.311867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.311946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.774 [2024-11-19 03:16:11.311974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.774 [2024-11-19 03:16:11.316282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.774 [2024-11-19 03:16:11.316414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.316441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.320482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.320610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.320638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.325034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.325143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.325171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.329678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.329779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.329814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.333882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.333976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.334004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.338060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.338186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.338213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.342274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.342355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.342382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.346460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.346544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.346571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.350708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.350804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.350831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.354935] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.355033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.355060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.359219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.359349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.359379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.363717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.363844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.363874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.368846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.369051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.369081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.374348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.374531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.374561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.380121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.380265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.380302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.775 [2024-11-19 03:16:11.385312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbec7a0) with pdu=0x2000166fef90 00:35:00.775 [2024-11-19 03:16:11.385473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.775 [2024-11-19 03:16:11.385503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:01.034 5802.00 IOPS, 725.25 MiB/s 00:35:01.034 
Latency(us) 00:35:01.034 [2024-11-19T02:16:11.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.034 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:01.035 nvme0n1 : 2.00 5800.26 725.03 0.00 0.00 2751.40 2002.49 9223.59 00:35:01.035 [2024-11-19T02:16:11.650Z] =================================================================================================================== 00:35:01.035 [2024-11-19T02:16:11.650Z] Total : 5800.26 725.03 0.00 0.00 2751.40 2002.49 9223.59 00:35:01.035 { 00:35:01.035 "results": [ 00:35:01.035 { 00:35:01.035 "job": "nvme0n1", 00:35:01.035 "core_mask": "0x2", 00:35:01.035 "workload": "randwrite", 00:35:01.035 "status": "finished", 00:35:01.035 "queue_depth": 16, 00:35:01.035 "io_size": 131072, 00:35:01.035 "runtime": 2.004049, 00:35:01.035 "iops": 5800.257378936343, 00:35:01.035 "mibps": 725.0321723670429, 00:35:01.035 "io_failed": 0, 00:35:01.035 "io_timeout": 0, 00:35:01.035 "avg_latency_us": 2751.3988101246464, 00:35:01.035 "min_latency_us": 2002.4888888888888, 00:35:01.035 "max_latency_us": 9223.585185185186 00:35:01.035 } 00:35:01.035 ], 00:35:01.035 "core_count": 1 00:35:01.035 } 00:35:01.035 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:01.035 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:01.035 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:01.035 | .driver_specific 00:35:01.035 | .nvme_error 00:35:01.035 | .status_code 00:35:01.035 | .command_transient_transport_error' 00:35:01.035 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 397486 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 397486 ']' 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 397486 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397486 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397486' 00:35:01.294 killing process with pid 397486 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 397486 00:35:01.294 Received shutdown signal, test time was about 2.000000 seconds 00:35:01.294 00:35:01.294 Latency(us) 00:35:01.294 [2024-11-19T02:16:11.909Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.294 [2024-11-19T02:16:11.909Z] =================================================================================================================== 00:35:01.294 [2024-11-19T02:16:11.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:01.294 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 397486 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 396125 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396125 ']' 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396125 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396125 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396125' 00:35:01.555 killing process with pid 396125 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396125 00:35:01.555 03:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396125 00:35:01.555 00:35:01.555 real 0m15.130s 00:35:01.555 user 0m30.443s 00:35:01.555 sys 0m4.222s 00:35:01.555 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.555 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.555 ************************************ 00:35:01.555 END TEST nvmf_digest_error 00:35:01.555 ************************************ 00:35:01.821 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.822 rmmod nvme_tcp 00:35:01.822 rmmod nvme_fabrics 00:35:01.822 rmmod nvme_keyring 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@517 -- # '[' -n 396125 ']' 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 396125 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 396125 ']' 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 396125 00:35:01.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (396125) - No such process 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 396125 is not found' 00:35:01.822 Process with pid 396125 is not found 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.822 03:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.739 00:35:03.739 real 0m35.763s 00:35:03.739 user 1m3.060s 00:35:03.739 sys 0m10.372s 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.739 ************************************ 00:35:03.739 END TEST nvmf_digest 00:35:03.739 ************************************ 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.739 ************************************ 00:35:03.739 START TEST nvmf_bdevperf 00:35:03.739 ************************************ 00:35:03.739 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:04.000 * Looking for test storage... 
00:35:04.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.000 --rc genhtml_branch_coverage=1 00:35:04.000 --rc genhtml_function_coverage=1 00:35:04.000 --rc genhtml_legend=1 00:35:04.000 --rc geninfo_all_blocks=1 00:35:04.000 --rc geninfo_unexecuted_blocks=1 00:35:04.000 00:35:04.000 ' 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.000 --rc genhtml_branch_coverage=1 00:35:04.000 --rc genhtml_function_coverage=1 00:35:04.000 --rc genhtml_legend=1 00:35:04.000 --rc geninfo_all_blocks=1 00:35:04.000 --rc geninfo_unexecuted_blocks=1 00:35:04.000 00:35:04.000 ' 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.000 --rc genhtml_branch_coverage=1 00:35:04.000 --rc genhtml_function_coverage=1 00:35:04.000 --rc genhtml_legend=1 00:35:04.000 --rc geninfo_all_blocks=1 00:35:04.000 --rc geninfo_unexecuted_blocks=1 00:35:04.000 00:35:04.000 ' 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.000 --rc genhtml_branch_coverage=1 00:35:04.000 --rc genhtml_function_coverage=1 00:35:04.000 --rc genhtml_legend=1 00:35:04.000 --rc geninfo_all_blocks=1 00:35:04.000 --rc geninfo_unexecuted_blocks=1 00:35:04.000 00:35:04.000 ' 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.000 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:04.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:04.001 03:16:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:06.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:06.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:06.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:06.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:06.539 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:06.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:35:06.540 00:35:06.540 --- 10.0.0.2 ping statistics --- 00:35:06.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.540 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:06.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:35:06.540 00:35:06.540 --- 10.0.0.1 ping statistics --- 00:35:06.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.540 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=399851 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 399851 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 399851 ']' 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.540 03:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.540 [2024-11-19 03:16:16.834454] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
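For reference, the nvmf_tcp_init sequence traced above reduces to the following iproute2/iptables steps; the interface names (cvl_0_0, cvl_0_1), the namespace name and the addresses are the ones appearing in the trace, and this is only an illustrative sketch of what the helper does, not the test script itself:

ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port (the trace adds an SPDK_NVMF comment tag, omitted here)
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability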
00:35:06.540 [2024-11-19 03:16:16.834524] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:06.540 [2024-11-19 03:16:16.908228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:06.540 [2024-11-19 03:16:16.956822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.540 [2024-11-19 03:16:16.956887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.540 [2024-11-19 03:16:16.956901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.540 [2024-11-19 03:16:16.956917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.540 [2024-11-19 03:16:16.956928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:06.540 [2024-11-19 03:16:16.958480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:06.540 [2024-11-19 03:16:16.958542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:06.540 [2024-11-19 03:16:16.958545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.540 [2024-11-19 03:16:17.111248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.540 Malloc0 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.540 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.799 [2024-11-19 03:16:17.171077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:06.799 { 00:35:06.799 "params": { 00:35:06.799 "name": "Nvme$subsystem", 00:35:06.799 "trtype": "$TEST_TRANSPORT", 00:35:06.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.799 "adrfam": "ipv4", 00:35:06.799 "trsvcid": "$NVMF_PORT", 00:35:06.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.799 "hdgst": ${hdgst:-false}, 00:35:06.799 "ddgst": ${ddgst:-false} 00:35:06.799 }, 00:35:06.799 "method": "bdev_nvme_attach_controller" 00:35:06.799 } 00:35:06.799 EOF 00:35:06.799 )") 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:06.799 03:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:06.799 "params": { 00:35:06.799 "name": "Nvme1", 00:35:06.799 "trtype": "tcp", 00:35:06.799 "traddr": "10.0.0.2", 00:35:06.799 "adrfam": "ipv4", 00:35:06.799 "trsvcid": "4420", 00:35:06.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:06.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:06.799 "hdgst": false, 00:35:06.799 "ddgst": false 00:35:06.799 }, 00:35:06.799 "method": "bdev_nvme_attach_controller" 00:35:06.799 }' 00:35:06.799 [2024-11-19 03:16:17.225563] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
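The tgt_init/nvmfappstart portion above starts nvmf_tgt inside that namespace and provisions it through rpc_cmd. Reproduced by hand, the same target configuration looks roughly like the sketch below; rpc_cmd in the trace forwards these calls to the SPDK RPC socket, and issuing them through scripts/rpc.py is an assumed (but conventional) equivalent, with relative paths into the SPDK tree also assumed:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &    # target app on cores 1-3, as in the trace
# wait for /var/tmp/spdk.sock to come up, then provision:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8 KiB IO unit size
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM-backed bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as a namespace
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420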
00:35:06.799 [2024-11-19 03:16:17.225639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399925 ] 00:35:06.799 [2024-11-19 03:16:17.296636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.799 [2024-11-19 03:16:17.345510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.058 Running I/O for 1 seconds... 00:35:08.433 8511.00 IOPS, 33.25 MiB/s 00:35:08.433 Latency(us) 00:35:08.433 [2024-11-19T02:16:19.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.433 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:08.433 Verification LBA range: start 0x0 length 0x4000 00:35:08.433 Nvme1n1 : 1.01 8597.17 33.58 0.00 0.00 14795.10 1796.17 15243.19 00:35:08.433 [2024-11-19T02:16:19.048Z] =================================================================================================================== 00:35:08.433 [2024-11-19T02:16:19.048Z] Total : 8597.17 33.58 0.00 0.00 14795.10 1796.17 15243.19 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=400135 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:08.433 { 00:35:08.433 "params": { 00:35:08.433 "name": "Nvme$subsystem", 00:35:08.433 "trtype": "$TEST_TRANSPORT", 00:35:08.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.433 "adrfam": "ipv4", 00:35:08.433 "trsvcid": "$NVMF_PORT", 00:35:08.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.433 "hdgst": ${hdgst:-false}, 00:35:08.433 "ddgst": ${ddgst:-false} 00:35:08.433 }, 00:35:08.433 "method": "bdev_nvme_attach_controller" 00:35:08.433 } 00:35:08.433 EOF 00:35:08.433 )") 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
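On the initiator side, gen_nvmf_target_json expands the heredoc template into the bdev_nvme_attach_controller entry visible in the trace and hands it to bdevperf on an anonymous file descriptor. A standalone equivalent of the 15-second run would be to put that entry into a JSON config file and point bdevperf at it; the outer "subsystems"/"bdev" envelope below is the assumed shape of the generated file, only the inner entry appears verbatim in the trace:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f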
00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:08.433 03:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:08.433 "params": { 00:35:08.433 "name": "Nvme1", 00:35:08.433 "trtype": "tcp", 00:35:08.433 "traddr": "10.0.0.2", 00:35:08.433 "adrfam": "ipv4", 00:35:08.433 "trsvcid": "4420", 00:35:08.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:08.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:08.433 "hdgst": false, 00:35:08.433 "ddgst": false 00:35:08.433 }, 00:35:08.433 "method": "bdev_nvme_attach_controller" 00:35:08.433 }' 00:35:08.433 [2024-11-19 03:16:18.868569] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:08.433 [2024-11-19 03:16:18.868659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400135 ] 00:35:08.433 [2024-11-19 03:16:18.936310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.433 [2024-11-19 03:16:18.982547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.999 Running I/O for 15 seconds... 00:35:10.868 8645.00 IOPS, 33.77 MiB/s [2024-11-19T02:16:22.051Z] 8652.50 IOPS, 33.80 MiB/s [2024-11-19T02:16:22.051Z] 03:16:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 399851 00:35:11.436 03:16:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:11.436 [2024-11-19 03:16:21.834821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.834871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.834903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.834921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.834940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.834956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.834974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.834990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835087] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835441] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.436 [2024-11-19 03:16:21.835644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.835964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.835995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.836010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.436 [2024-11-19 03:16:21.836025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.436 [2024-11-19 03:16:21.836038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:11.437 [2024-11-19 03:16:21.836100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836373] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.836870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.836899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.836928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.836959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.836975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.837002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.837029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.837068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.837093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.837118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.437 [2024-11-19 03:16:21.837142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.837167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.437 [2024-11-19 03:16:21.837197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.437 [2024-11-19 03:16:21.837210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42016 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:11.438 [2024-11-19 03:16:21.837533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.837969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.837996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.438 [2024-11-19 03:16:21.838269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.438 [2024-11-19 03:16:21.838282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.439 [2024-11-19 03:16:21.838650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976f20 is same with the state(6) to be set 00:35:11.439 [2024-11-19 03:16:21.838701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:11.439 [2024-11-19 03:16:21.838715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:11.439 [2024-11-19 03:16:21.838727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42424 len:8 PRP1 0x0 PRP2 0x0 00:35:11.439 [2024-11-19 03:16:21.838741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.439 [2024-11-19 03:16:21.838887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.439 [2024-11-19 03:16:21.838916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.439 [2024-11-19 03:16:21.838943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.439 [2024-11-19 03:16:21.838969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.439 [2024-11-19 03:16:21.838981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.439 [2024-11-19 03:16:21.842096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.439 [2024-11-19 03:16:21.842129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.439 [2024-11-19 03:16:21.842656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.439 [2024-11-19 03:16:21.842907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.439 [2024-11-19 03:16:21.842926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.439 [2024-11-19 03:16:21.843164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.439 [2024-11-19 03:16:21.843368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.439 [2024-11-19 03:16:21.843387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller 
reinitialization failed 00:35:11.439 [2024-11-19 03:16:21.843404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.439 [2024-11-19 03:16:21.843419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.439 [2024-11-19 03:16:21.855501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.439 [2024-11-19 03:16:21.855946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.439 [2024-11-19 03:16:21.855976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.439 [2024-11-19 03:16:21.856007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.439 [2024-11-19 03:16:21.856241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.439 [2024-11-19 03:16:21.856443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.439 [2024-11-19 03:16:21.856461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.439 [2024-11-19 03:16:21.856474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.439 [2024-11-19 03:16:21.856485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.439 [2024-11-19 03:16:21.868625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.439 [2024-11-19 03:16:21.868996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.439 [2024-11-19 03:16:21.869040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.439 [2024-11-19 03:16:21.869056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.439 [2024-11-19 03:16:21.869290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.439 [2024-11-19 03:16:21.869493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.439 [2024-11-19 03:16:21.869512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.439 [2024-11-19 03:16:21.869524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.439 [2024-11-19 03:16:21.869535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.439 [2024-11-19 03:16:21.881742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.439 [2024-11-19 03:16:21.882151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.439 [2024-11-19 03:16:21.882178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.439 [2024-11-19 03:16:21.882193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.439 [2024-11-19 03:16:21.882409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.439 [2024-11-19 03:16:21.882613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.439 [2024-11-19 03:16:21.882632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.439 [2024-11-19 03:16:21.882649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.439 [2024-11-19 03:16:21.882661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.439 [2024-11-19 03:16:21.894838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.439 [2024-11-19 03:16:21.895211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.439 [2024-11-19 03:16:21.895238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.895253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.895469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.895672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.895714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.895730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.895742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.440 [2024-11-19 03:16:21.907854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.908259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.908287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.908303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.908539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.908769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.908789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.908802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.908814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.440 [2024-11-19 03:16:21.920825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.921170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.921198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.921214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.921448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.921651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.921686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.921710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.921723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.440 [2024-11-19 03:16:21.933803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.934164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.934193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.934209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.934447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.934650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.934684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.934708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.934721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.440 [2024-11-19 03:16:21.946791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.947196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.947223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.947239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.947474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.947677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.947721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.947736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.947748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.440 [2024-11-19 03:16:21.959862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.960204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.960231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.960246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.960474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.960678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.960720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.960734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.960746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.440 [2024-11-19 03:16:21.972839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.973181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.973208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.973229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.973466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.973669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.973709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.973725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.973737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.440 [2024-11-19 03:16:21.985864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.986268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.986296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.986312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.440 [2024-11-19 03:16:21.986546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.440 [2024-11-19 03:16:21.986777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.440 [2024-11-19 03:16:21.986797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.440 [2024-11-19 03:16:21.986810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.440 [2024-11-19 03:16:21.986822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.440 [2024-11-19 03:16:21.998874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.440 [2024-11-19 03:16:21.999216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.440 [2024-11-19 03:16:21.999244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.440 [2024-11-19 03:16:21.999259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.441 [2024-11-19 03:16:21.999491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.441 [2024-11-19 03:16:21.999679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.441 [2024-11-19 03:16:21.999721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.441 [2024-11-19 03:16:21.999736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.441 [2024-11-19 03:16:21.999748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.441 [2024-11-19 03:16:22.012132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.441 [2024-11-19 03:16:22.012537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.441 [2024-11-19 03:16:22.012564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.441 [2024-11-19 03:16:22.012579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.441 [2024-11-19 03:16:22.012813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.441 [2024-11-19 03:16:22.013049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.441 [2024-11-19 03:16:22.013083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.441 [2024-11-19 03:16:22.013097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.441 [2024-11-19 03:16:22.013108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.441 [2024-11-19 03:16:22.025127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.441 [2024-11-19 03:16:22.025471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.441 [2024-11-19 03:16:22.025498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.441 [2024-11-19 03:16:22.025514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.441 [2024-11-19 03:16:22.025760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.441 [2024-11-19 03:16:22.025969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.441 [2024-11-19 03:16:22.025989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.441 [2024-11-19 03:16:22.026002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.441 [2024-11-19 03:16:22.026029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.441 [2024-11-19 03:16:22.038147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.441 [2024-11-19 03:16:22.038462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.441 [2024-11-19 03:16:22.038490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.441 [2024-11-19 03:16:22.038506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.441 [2024-11-19 03:16:22.038733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.441 [2024-11-19 03:16:22.038933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.441 [2024-11-19 03:16:22.038953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.441 [2024-11-19 03:16:22.038966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.441 [2024-11-19 03:16:22.038978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.699 [2024-11-19 03:16:22.051662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.699 [2024-11-19 03:16:22.052058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.699 [2024-11-19 03:16:22.052087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.699 [2024-11-19 03:16:22.052103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.699 [2024-11-19 03:16:22.052345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.699 [2024-11-19 03:16:22.052572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.699 [2024-11-19 03:16:22.052592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.699 [2024-11-19 03:16:22.052610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.699 [2024-11-19 03:16:22.052623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.699 [2024-11-19 03:16:22.064836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.699 [2024-11-19 03:16:22.065201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.699 [2024-11-19 03:16:22.065228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.699 [2024-11-19 03:16:22.065243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.699 [2024-11-19 03:16:22.065472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.699 [2024-11-19 03:16:22.065700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.699 [2024-11-19 03:16:22.065720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.699 [2024-11-19 03:16:22.065747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.699 [2024-11-19 03:16:22.065760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.700 [2024-11-19 03:16:22.077941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.078373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.078399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.078415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.078627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.078897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.078918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.078932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.078944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.700 [2024-11-19 03:16:22.091051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.091456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.091484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.091500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.091731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.091975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.091997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.092026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.092038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.700 [2024-11-19 03:16:22.104132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.104479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.104507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.104522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.104768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.104977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.105010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.105022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.105034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.700 [2024-11-19 03:16:22.117204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.117546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.117573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.117588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.117832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.118046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.118079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.118092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.118103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.700 [2024-11-19 03:16:22.130315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.130622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.130649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.130664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.130910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.131149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.131169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.131181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.131192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.700 [2024-11-19 03:16:22.143336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.143736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.143763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.143783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.144000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.144202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.144221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.144233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.144245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.700 [2024-11-19 03:16:22.156523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.156892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.156919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.156934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.157147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.157349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.157368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.157380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.157391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.700 [2024-11-19 03:16:22.169648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.169996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.170024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.170040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.170275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.170479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.170498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.170510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.170522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.700 [2024-11-19 03:16:22.182716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.183035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.183062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.183078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.183294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.183501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.183521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.183533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.700 [2024-11-19 03:16:22.183544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.700 [2024-11-19 03:16:22.195883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.700 [2024-11-19 03:16:22.196306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.700 [2024-11-19 03:16:22.196333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.700 [2024-11-19 03:16:22.196349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.700 [2024-11-19 03:16:22.196584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.700 [2024-11-19 03:16:22.196812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.700 [2024-11-19 03:16:22.196832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.700 [2024-11-19 03:16:22.196845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.196857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.701 [2024-11-19 03:16:22.208863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.209266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.209293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.209308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.209542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.209785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.209805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.209819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.209831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.701 [2024-11-19 03:16:22.221960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.222366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.222393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.222408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.222637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.222862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.222884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.222897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.222914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.701 [2024-11-19 03:16:22.235064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.235469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.235497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.235513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.235757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.235957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.235976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.236003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.236015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.701 [2024-11-19 03:16:22.248031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.248382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.248410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.248426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.248659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.248890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.248911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.248924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.248936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.701 [2024-11-19 03:16:22.261129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.261418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.261460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.261475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.261697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.261906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.261926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.261939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.261951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.701 [2024-11-19 03:16:22.274122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.274530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.274556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.274571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.274815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.275044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.275078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.275091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.275103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.701 [2024-11-19 03:16:22.287200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.287602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.287629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.287645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.287911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.288119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.288138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.288150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.288162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.701 [2024-11-19 03:16:22.300175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.300581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.300609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.300625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.300889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.301115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.301134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.301146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.301157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.701 [2024-11-19 03:16:22.313295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.701 [2024-11-19 03:16:22.313672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.701 [2024-11-19 03:16:22.313706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.701 [2024-11-19 03:16:22.313723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.701 [2024-11-19 03:16:22.313955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.701 [2024-11-19 03:16:22.314177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.701 [2024-11-19 03:16:22.314197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.701 [2024-11-19 03:16:22.314209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.701 [2024-11-19 03:16:22.314221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.961 [2024-11-19 03:16:22.326441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.961 [2024-11-19 03:16:22.326753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.961 [2024-11-19 03:16:22.326782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.961 [2024-11-19 03:16:22.326799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.961 [2024-11-19 03:16:22.327020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.961 [2024-11-19 03:16:22.327223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.961 [2024-11-19 03:16:22.327241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.961 [2024-11-19 03:16:22.327254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.961 [2024-11-19 03:16:22.327266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.961 [2024-11-19 03:16:22.339418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.961 [2024-11-19 03:16:22.339822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.961 [2024-11-19 03:16:22.339850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.961 [2024-11-19 03:16:22.339866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.961 [2024-11-19 03:16:22.340101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.961 [2024-11-19 03:16:22.340306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.961 [2024-11-19 03:16:22.340325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.961 [2024-11-19 03:16:22.340337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.961 [2024-11-19 03:16:22.340349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.961 7195.00 IOPS, 28.11 MiB/s [2024-11-19T02:16:22.576Z] [2024-11-19 03:16:22.352622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.961 [2024-11-19 03:16:22.353017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.961 [2024-11-19 03:16:22.353045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.961 [2024-11-19 03:16:22.353061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.961 [2024-11-19 03:16:22.353277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.961 [2024-11-19 03:16:22.353489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.961 [2024-11-19 03:16:22.353509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.961 [2024-11-19 03:16:22.353522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.961 [2024-11-19 03:16:22.353534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.961 [2024-11-19 03:16:22.365728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.961 [2024-11-19 03:16:22.366059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.961 [2024-11-19 03:16:22.366087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.961 [2024-11-19 03:16:22.366102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.961 [2024-11-19 03:16:22.366318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.961 [2024-11-19 03:16:22.366521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.961 [2024-11-19 03:16:22.366541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.961 [2024-11-19 03:16:22.366554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.961 [2024-11-19 03:16:22.366565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.961 [2024-11-19 03:16:22.378939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.961 [2024-11-19 03:16:22.379299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.961 [2024-11-19 03:16:22.379326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.961 [2024-11-19 03:16:22.379342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.961 [2024-11-19 03:16:22.379575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.961 [2024-11-19 03:16:22.379808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.961 [2024-11-19 03:16:22.379830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.961 [2024-11-19 03:16:22.379844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.961 [2024-11-19 03:16:22.379856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.961 [2024-11-19 03:16:22.392073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.961 [2024-11-19 03:16:22.392413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.961 [2024-11-19 03:16:22.392440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.961 [2024-11-19 03:16:22.392456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.961 [2024-11-19 03:16:22.392697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.961 [2024-11-19 03:16:22.392911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.961 [2024-11-19 03:16:22.392931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.961 [2024-11-19 03:16:22.392943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.961 [2024-11-19 03:16:22.392960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.961 [2024-11-19 03:16:22.405079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.961 [2024-11-19 03:16:22.405480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.961 [2024-11-19 03:16:22.405507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.961 [2024-11-19 03:16:22.405523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.961 [2024-11-19 03:16:22.405768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.961 [2024-11-19 03:16:22.405991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.961 [2024-11-19 03:16:22.406010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.961 [2024-11-19 03:16:22.406022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.961 [2024-11-19 03:16:22.406034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.961 [2024-11-19 03:16:22.418018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.418421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.418448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.418464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.418708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.418902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.418921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.418934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.418945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.962 [2024-11-19 03:16:22.431036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.431380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.431407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.431423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.431657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.431888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.431910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.431923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.431934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.962 [2024-11-19 03:16:22.444002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.444352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.444379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.444395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.444631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.444863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.444884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.444898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.444910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.962 [2024-11-19 03:16:22.456991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.457372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.457399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.457414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.457631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.457865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.457886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.457900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.457913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.962 [2024-11-19 03:16:22.470071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.470474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.470503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.470519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.470766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.470980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.471000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.471012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.471025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.962 [2024-11-19 03:16:22.483073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.483411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.483438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.483453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.483697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.483911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.483931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.483945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.483957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.962 [2024-11-19 03:16:22.496430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.496800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.496830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.496846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.497079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.497290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.497310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.497323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.497334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.962 [2024-11-19 03:16:22.509728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.510073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.962 [2024-11-19 03:16:22.510101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.962 [2024-11-19 03:16:22.510116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.962 [2024-11-19 03:16:22.510337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.962 [2024-11-19 03:16:22.510545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.962 [2024-11-19 03:16:22.510564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.962 [2024-11-19 03:16:22.510577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.962 [2024-11-19 03:16:22.510589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.962 [2024-11-19 03:16:22.522885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.962 [2024-11-19 03:16:22.523265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.963 [2024-11-19 03:16:22.523293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.963 [2024-11-19 03:16:22.523308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.963 [2024-11-19 03:16:22.523542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.963 [2024-11-19 03:16:22.523775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.963 [2024-11-19 03:16:22.523802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.963 [2024-11-19 03:16:22.523816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.963 [2024-11-19 03:16:22.523829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.963 [2024-11-19 03:16:22.536048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.963 [2024-11-19 03:16:22.536451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.963 [2024-11-19 03:16:22.536478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.963 [2024-11-19 03:16:22.536494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.963 [2024-11-19 03:16:22.536719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.963 [2024-11-19 03:16:22.536919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.963 [2024-11-19 03:16:22.536939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.963 [2024-11-19 03:16:22.536952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.963 [2024-11-19 03:16:22.536964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.963 [2024-11-19 03:16:22.549482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.963 [2024-11-19 03:16:22.549892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.963 [2024-11-19 03:16:22.549921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.963 [2024-11-19 03:16:22.549937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.963 [2024-11-19 03:16:22.550193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.963 [2024-11-19 03:16:22.550382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.963 [2024-11-19 03:16:22.550401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.963 [2024-11-19 03:16:22.550415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.963 [2024-11-19 03:16:22.550426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.963 [2024-11-19 03:16:22.562620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.963 [2024-11-19 03:16:22.563016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.963 [2024-11-19 03:16:22.563045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.963 [2024-11-19 03:16:22.563061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.963 [2024-11-19 03:16:22.563295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.963 [2024-11-19 03:16:22.563498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.963 [2024-11-19 03:16:22.563517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.963 [2024-11-19 03:16:22.563530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.963 [2024-11-19 03:16:22.563546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:11.963 [2024-11-19 03:16:22.576246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.963 [2024-11-19 03:16:22.576583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.963 [2024-11-19 03:16:22.576610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:11.963 [2024-11-19 03:16:22.576626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:11.963 [2024-11-19 03:16:22.576869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:11.963 [2024-11-19 03:16:22.577103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.963 [2024-11-19 03:16:22.577122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.963 [2024-11-19 03:16:22.577135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.963 [2024-11-19 03:16:22.577147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.242 [2024-11-19 03:16:22.589859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-19 03:16:22.590217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-19 03:16:22.590245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-19 03:16:22.590262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.242 [2024-11-19 03:16:22.590490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.242 [2024-11-19 03:16:22.590774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-19 03:16:22.590796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-19 03:16:22.590811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-19 03:16:22.590824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.242 [2024-11-19 03:16:22.603370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-19 03:16:22.603703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-19 03:16:22.603748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-19 03:16:22.603766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.242 [2024-11-19 03:16:22.603997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.242 [2024-11-19 03:16:22.604219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-19 03:16:22.604240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-19 03:16:22.604254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-19 03:16:22.604266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.242 [2024-11-19 03:16:22.616601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-19 03:16:22.617011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-19 03:16:22.617054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-19 03:16:22.617070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.242 [2024-11-19 03:16:22.617302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.242 [2024-11-19 03:16:22.617503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-19 03:16:22.617522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-19 03:16:22.617534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-19 03:16:22.617545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.242 [2024-11-19 03:16:22.630000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-19 03:16:22.630348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-19 03:16:22.630376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-19 03:16:22.630392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.242 [2024-11-19 03:16:22.630615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.242 [2024-11-19 03:16:22.630858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-19 03:16:22.630879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-19 03:16:22.630893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-19 03:16:22.630905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.242 [2024-11-19 03:16:22.643324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-19 03:16:22.643757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-19 03:16:22.643786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-19 03:16:22.643812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.242 [2024-11-19 03:16:22.644037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.242 [2024-11-19 03:16:22.644239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-19 03:16:22.644257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-19 03:16:22.644271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-19 03:16:22.644282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.242 [2024-11-19 03:16:22.656609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-19 03:16:22.657003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-19 03:16:22.657032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-19 03:16:22.657064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.242 [2024-11-19 03:16:22.657304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.242 [2024-11-19 03:16:22.657492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-19 03:16:22.657510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-19 03:16:22.657523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-19 03:16:22.657534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.243 [2024-11-19 03:16:22.669776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.670143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.670171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.670186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.670422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.670625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.670644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.670657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.670681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.243 [2024-11-19 03:16:22.682843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.683185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.683213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.683229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.683464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.683681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.683708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.683721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.683747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.243 [2024-11-19 03:16:22.695813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.696119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.696146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.696161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.696378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.696581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.696604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.696618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.696629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.243 [2024-11-19 03:16:22.708919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.709332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.709358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.709372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.709601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.709831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.709851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.709864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.709876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.243 [2024-11-19 03:16:22.721883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.722290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.722317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.722333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.722573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.722801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.722821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.722834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.722846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.243 [2024-11-19 03:16:22.734890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.735294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.735321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.735337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.735577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.735819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.735840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.735854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.735870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.243 [2024-11-19 03:16:22.748030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.748443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.748471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.748497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.748743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.748958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.748978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.749006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.749018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.243 [2024-11-19 03:16:22.761174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.761523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.761551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.761567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.761810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.762004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.762037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.762050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.762062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.243 [2024-11-19 03:16:22.774241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.774649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.774676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.774722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.774963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.775183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.243 [2024-11-19 03:16:22.775202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.243 [2024-11-19 03:16:22.775215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.243 [2024-11-19 03:16:22.775226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.243 [2024-11-19 03:16:22.787296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.243 [2024-11-19 03:16:22.787647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.243 [2024-11-19 03:16:22.787711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.243 [2024-11-19 03:16:22.787729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.243 [2024-11-19 03:16:22.787982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.243 [2024-11-19 03:16:22.788202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.244 [2024-11-19 03:16:22.788221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.244 [2024-11-19 03:16:22.788233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.244 [2024-11-19 03:16:22.788244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.244 [2024-11-19 03:16:22.800483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.244 [2024-11-19 03:16:22.800874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.244 [2024-11-19 03:16:22.800916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.244 [2024-11-19 03:16:22.800932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.244 [2024-11-19 03:16:22.801167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.244 [2024-11-19 03:16:22.801370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.244 [2024-11-19 03:16:22.801389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.244 [2024-11-19 03:16:22.801402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.244 [2024-11-19 03:16:22.801413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.244 [2024-11-19 03:16:22.814166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.244 [2024-11-19 03:16:22.814519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.244 [2024-11-19 03:16:22.814560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.244 [2024-11-19 03:16:22.814575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.244 [2024-11-19 03:16:22.814819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.244 [2024-11-19 03:16:22.815042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.244 [2024-11-19 03:16:22.815064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.244 [2024-11-19 03:16:22.815076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.244 [2024-11-19 03:16:22.815087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.244 [2024-11-19 03:16:22.827336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.244 [2024-11-19 03:16:22.827767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.244 [2024-11-19 03:16:22.827796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.244 [2024-11-19 03:16:22.827811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.244 [2024-11-19 03:16:22.828057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.244 [2024-11-19 03:16:22.828275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.244 [2024-11-19 03:16:22.828295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.244 [2024-11-19 03:16:22.828308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.244 [2024-11-19 03:16:22.828320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.244 [2024-11-19 03:16:22.841161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.244 [2024-11-19 03:16:22.841547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.244 [2024-11-19 03:16:22.841575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.244 [2024-11-19 03:16:22.841592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.244 [2024-11-19 03:16:22.841815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.244 [2024-11-19 03:16:22.842049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.244 [2024-11-19 03:16:22.842070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.244 [2024-11-19 03:16:22.842085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.244 [2024-11-19 03:16:22.842098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.527 [2024-11-19 03:16:22.854484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.527 [2024-11-19 03:16:22.854799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.527 [2024-11-19 03:16:22.854829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.527 [2024-11-19 03:16:22.854846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.527 [2024-11-19 03:16:22.855083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.527 [2024-11-19 03:16:22.855292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.527 [2024-11-19 03:16:22.855311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.527 [2024-11-19 03:16:22.855323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.527 [2024-11-19 03:16:22.855335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.527 [2024-11-19 03:16:22.868316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.527 [2024-11-19 03:16:22.868706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.527 [2024-11-19 03:16:22.868755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.527 [2024-11-19 03:16:22.868772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.527 [2024-11-19 03:16:22.869001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.527 [2024-11-19 03:16:22.869228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.527 [2024-11-19 03:16:22.869265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.527 [2024-11-19 03:16:22.869295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.527 [2024-11-19 03:16:22.869308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.527 [2024-11-19 03:16:22.881806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.527 [2024-11-19 03:16:22.882177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.527 [2024-11-19 03:16:22.882222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.527 [2024-11-19 03:16:22.882254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.527 [2024-11-19 03:16:22.882492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.527 [2024-11-19 03:16:22.882703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.527 [2024-11-19 03:16:22.882724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.527 [2024-11-19 03:16:22.882752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.527 [2024-11-19 03:16:22.882765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.527 [2024-11-19 03:16:22.895514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.527 [2024-11-19 03:16:22.895832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.527 [2024-11-19 03:16:22.895861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.527 [2024-11-19 03:16:22.895877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.527 [2024-11-19 03:16:22.896106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.527 [2024-11-19 03:16:22.896321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.527 [2024-11-19 03:16:22.896340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.527 [2024-11-19 03:16:22.896354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.527 [2024-11-19 03:16:22.896365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.527 [2024-11-19 03:16:22.908813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.527 [2024-11-19 03:16:22.909258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.527 [2024-11-19 03:16:22.909286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.527 [2024-11-19 03:16:22.909302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.527 [2024-11-19 03:16:22.909535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.527 [2024-11-19 03:16:22.909753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.527 [2024-11-19 03:16:22.909775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.527 [2024-11-19 03:16:22.909789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.527 [2024-11-19 03:16:22.909802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.527 [2024-11-19 03:16:22.922060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.527 [2024-11-19 03:16:22.922463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.527 [2024-11-19 03:16:22.922491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.527 [2024-11-19 03:16:22.922507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.527 [2024-11-19 03:16:22.922756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:22.922998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:22.923018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:22.923032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:22.923044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.528 [2024-11-19 03:16:22.935226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:22.935648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:22.935704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:22.935721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:22.935947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:22.936150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:22.936169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:22.936181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:22.936192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.528 [2024-11-19 03:16:22.948229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:22.948653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:22.948705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:22.948733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:22.948979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:22.949166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:22.949185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:22.949197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:22.949208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.528 [2024-11-19 03:16:22.961335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:22.961681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:22.961747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:22.961764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:22.962001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:22.962203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:22.962222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:22.962235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:22.962246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.528 [2024-11-19 03:16:22.974301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:22.974714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:22.974742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:22.974764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:22.975001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:22.975205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:22.975224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:22.975236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:22.975248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.528 [2024-11-19 03:16:22.987473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:22.987856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:22.987886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:22.987902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:22.988156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:22.988345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:22.988365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:22.988378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:22.988389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.528 [2024-11-19 03:16:23.000616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:23.000952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:23.000981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:23.000997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:23.001226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:23.001431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:23.001452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:23.001465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:23.001477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.528 [2024-11-19 03:16:23.013684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:23.014059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:23.014086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:23.014102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:23.014317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:23.014521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:23.014541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:23.014554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:23.014566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.528 [2024-11-19 03:16:23.026784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:23.027128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:23.027155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:23.027169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:23.027398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:23.027601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:23.027620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:23.027632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:23.027643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.528 [2024-11-19 03:16:23.039848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:23.040195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:23.040223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:23.040239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:23.040477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.528 [2024-11-19 03:16:23.040707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.528 [2024-11-19 03:16:23.040728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.528 [2024-11-19 03:16:23.040761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.528 [2024-11-19 03:16:23.040776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.528 [2024-11-19 03:16:23.052815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.528 [2024-11-19 03:16:23.053219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.528 [2024-11-19 03:16:23.053246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.528 [2024-11-19 03:16:23.053261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.528 [2024-11-19 03:16:23.053490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.529 [2024-11-19 03:16:23.053719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.529 [2024-11-19 03:16:23.053739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.529 [2024-11-19 03:16:23.053752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.529 [2024-11-19 03:16:23.053765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.529 [2024-11-19 03:16:23.065841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.529 [2024-11-19 03:16:23.066133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.529 [2024-11-19 03:16:23.066160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.529 [2024-11-19 03:16:23.066176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.529 [2024-11-19 03:16:23.066386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.529 [2024-11-19 03:16:23.066589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.529 [2024-11-19 03:16:23.066610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.529 [2024-11-19 03:16:23.066623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.529 [2024-11-19 03:16:23.066634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.529 [2024-11-19 03:16:23.078948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.529 [2024-11-19 03:16:23.079307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.529 [2024-11-19 03:16:23.079336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.529 [2024-11-19 03:16:23.079353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.529 [2024-11-19 03:16:23.079590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.529 [2024-11-19 03:16:23.079861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.529 [2024-11-19 03:16:23.079883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.529 [2024-11-19 03:16:23.079898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.529 [2024-11-19 03:16:23.079910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.529 [2024-11-19 03:16:23.092058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.529 [2024-11-19 03:16:23.092404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.529 [2024-11-19 03:16:23.092433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.529 [2024-11-19 03:16:23.092449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.529 [2024-11-19 03:16:23.092682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.529 [2024-11-19 03:16:23.092904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.529 [2024-11-19 03:16:23.092924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.529 [2024-11-19 03:16:23.092937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.529 [2024-11-19 03:16:23.092950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.529 [2024-11-19 03:16:23.105090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.529 [2024-11-19 03:16:23.105497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.529 [2024-11-19 03:16:23.105526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.529 [2024-11-19 03:16:23.105541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.529 [2024-11-19 03:16:23.105776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.529 [2024-11-19 03:16:23.105981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.529 [2024-11-19 03:16:23.106015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.529 [2024-11-19 03:16:23.106029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.529 [2024-11-19 03:16:23.106041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.529 [2024-11-19 03:16:23.118821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.529 [2024-11-19 03:16:23.119201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.529 [2024-11-19 03:16:23.119232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.529 [2024-11-19 03:16:23.119248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.529 [2024-11-19 03:16:23.119472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.529 [2024-11-19 03:16:23.119677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.529 [2024-11-19 03:16:23.119712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.529 [2024-11-19 03:16:23.119729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.529 [2024-11-19 03:16:23.119742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.837 [2024-11-19 03:16:23.132282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.132662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.132724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.132749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.132966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.133204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.133227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.133241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.133253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.837 [2024-11-19 03:16:23.145893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.146285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.146314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.146330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.146564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.146832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.146855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.146871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.146884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.837 [2024-11-19 03:16:23.159646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.160130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.160177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.160194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.160437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.160639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.160659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.160699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.160717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.837 [2024-11-19 03:16:23.172754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.173162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.173191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.173207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.173443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.173650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.173671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.173683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.173722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.837 [2024-11-19 03:16:23.185843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.186148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.186191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.186207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.186424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.186634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.186655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.186667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.186679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.837 [2024-11-19 03:16:23.199020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.199365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.199393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.199409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.199644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.199882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.199904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.199918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.199931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.837 [2024-11-19 03:16:23.212091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.212400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.212428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.212444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.212658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.212894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.212915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.212934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.212948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.837 [2024-11-19 03:16:23.225050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.837 [2024-11-19 03:16:23.225394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.837 [2024-11-19 03:16:23.225422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.837 [2024-11-19 03:16:23.225438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.837 [2024-11-19 03:16:23.225669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.837 [2024-11-19 03:16:23.225894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.837 [2024-11-19 03:16:23.225916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.837 [2024-11-19 03:16:23.225930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.837 [2024-11-19 03:16:23.225942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.837 [2024-11-19 03:16:23.238111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.238459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.238487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.238503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.238748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.238962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.238981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.239010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.239023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.838 [2024-11-19 03:16:23.251145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.251512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.251541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.251558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.251804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.252027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.252048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.252060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.252072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.838 [2024-11-19 03:16:23.264141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.264484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.264511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.264527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.264769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.264962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.264983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.265011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.265023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.838 [2024-11-19 03:16:23.277250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.277653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.277682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.277708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.277931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.278153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.278172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.278184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.278196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.838 [2024-11-19 03:16:23.290295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.290724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.290752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.290768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.291012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.291199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.291219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.291233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.291245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.838 [2024-11-19 03:16:23.303323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.303737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.303766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.303787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.304035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.304222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.304240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.304253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.304265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.838 [2024-11-19 03:16:23.316302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.316647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.316676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.316718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.316961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.317182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.317202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.317215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.317226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.838 [2024-11-19 03:16:23.329287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.329676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.329711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.329742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.329972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.330185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.330220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.330234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.330246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.838 [2024-11-19 03:16:23.342421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.342829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.342858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.342874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.343107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.343300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.838 [2024-11-19 03:16:23.343320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.838 [2024-11-19 03:16:23.343332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.838 [2024-11-19 03:16:23.343345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.838 5396.25 IOPS, 21.08 MiB/s [2024-11-19T02:16:23.453Z] [2024-11-19 03:16:23.355455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.838 [2024-11-19 03:16:23.355798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.838 [2024-11-19 03:16:23.355826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.838 [2024-11-19 03:16:23.355843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.838 [2024-11-19 03:16:23.356059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.838 [2024-11-19 03:16:23.356262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.839 [2024-11-19 03:16:23.356281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.839 [2024-11-19 03:16:23.356294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.839 [2024-11-19 03:16:23.356305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.839 [2024-11-19 03:16:23.368945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.839 [2024-11-19 03:16:23.369381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.839 [2024-11-19 03:16:23.369409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.839 [2024-11-19 03:16:23.369425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.839 [2024-11-19 03:16:23.369660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.839 [2024-11-19 03:16:23.369861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.839 [2024-11-19 03:16:23.369880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.839 [2024-11-19 03:16:23.369893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.839 [2024-11-19 03:16:23.369907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.839 [2024-11-19 03:16:23.382012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.839 [2024-11-19 03:16:23.382353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.839 [2024-11-19 03:16:23.382382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.839 [2024-11-19 03:16:23.382398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.839 [2024-11-19 03:16:23.382634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.839 [2024-11-19 03:16:23.382869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.839 [2024-11-19 03:16:23.382899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.839 [2024-11-19 03:16:23.382918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.839 [2024-11-19 03:16:23.382932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.839 [2024-11-19 03:16:23.395084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.839 [2024-11-19 03:16:23.395455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.839 [2024-11-19 03:16:23.395483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.839 [2024-11-19 03:16:23.395498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.839 [2024-11-19 03:16:23.395726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.839 [2024-11-19 03:16:23.395925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.839 [2024-11-19 03:16:23.395945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.839 [2024-11-19 03:16:23.395958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.839 [2024-11-19 03:16:23.395971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.839 [2024-11-19 03:16:23.408146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.839 [2024-11-19 03:16:23.408550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.839 [2024-11-19 03:16:23.408578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.839 [2024-11-19 03:16:23.408594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.839 [2024-11-19 03:16:23.408839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.839 [2024-11-19 03:16:23.409068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.839 [2024-11-19 03:16:23.409088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.839 [2024-11-19 03:16:23.409101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.839 [2024-11-19 03:16:23.409112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:12.839 [2024-11-19 03:16:23.421446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.839 [2024-11-19 03:16:23.421842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.839 [2024-11-19 03:16:23.421874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.839 [2024-11-19 03:16:23.421890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.839 [2024-11-19 03:16:23.422121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.839 [2024-11-19 03:16:23.422334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.839 [2024-11-19 03:16:23.422356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.839 [2024-11-19 03:16:23.422371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.839 [2024-11-19 03:16:23.422384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.839 [2024-11-19 03:16:23.434786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.839 [2024-11-19 03:16:23.435193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.839 [2024-11-19 03:16:23.435243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:12.839 [2024-11-19 03:16:23.435260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:12.839 [2024-11-19 03:16:23.435511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:12.839 [2024-11-19 03:16:23.435734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.839 [2024-11-19 03:16:23.435756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.839 [2024-11-19 03:16:23.435769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.839 [2024-11-19 03:16:23.435782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.158 [2024-11-19 03:16:23.448633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.158 [2024-11-19 03:16:23.449060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.158 [2024-11-19 03:16:23.449110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.158 [2024-11-19 03:16:23.449127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.158 [2024-11-19 03:16:23.449358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.158 [2024-11-19 03:16:23.449585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.449607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.449636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.449650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.462133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.462560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.462590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.462608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.462865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.463093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.463115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.463128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.463157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.159 [2024-11-19 03:16:23.475516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.475871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.475902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.475924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.476180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.476375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.476396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.476409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.476421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.488633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.489053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.489083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.489100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.489336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.489540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.489560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.489573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.489584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.159 [2024-11-19 03:16:23.501621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.501966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.501994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.502009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.502219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.502422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.502443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.502456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.502467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.514713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.515055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.515084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.515101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.515336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.515543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.515564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.515577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.515589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.159 [2024-11-19 03:16:23.527768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.528174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.528202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.528218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.528453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.528656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.528676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.528699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.528729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.540859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.541265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.541294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.541310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.541545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.541796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.541819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.541834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.541847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.159 [2024-11-19 03:16:23.553982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.554385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.554413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.554429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.554659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.554891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.554913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.554930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.554943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.567028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.567373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.567401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.567417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.567651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.567882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.567903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.567915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.567927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.159 [2024-11-19 03:16:23.580051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.580426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.580454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.580469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.580685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.580906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.580925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.580938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.580950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.593220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.593574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.593602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.593618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.593864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.594073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.594092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.594104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.594117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.159 [2024-11-19 03:16:23.606339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.606696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.606741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.606759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.607001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.607206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.607226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.607239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.607251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.619884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.620294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.620348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.620364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.620609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.620837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.620858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.620872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.620885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.159 [2024-11-19 03:16:23.633113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.633532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.633587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.633604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.633842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.634092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.634112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.634125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.159 [2024-11-19 03:16:23.634137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.159 [2024-11-19 03:16:23.646447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.159 [2024-11-19 03:16:23.646778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.159 [2024-11-19 03:16:23.646807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.159 [2024-11-19 03:16:23.646832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.159 [2024-11-19 03:16:23.647076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.159 [2024-11-19 03:16:23.647264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.159 [2024-11-19 03:16:23.647283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.159 [2024-11-19 03:16:23.647295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.647307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.160 [2024-11-19 03:16:23.659720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.160 [2024-11-19 03:16:23.660149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.160 [2024-11-19 03:16:23.660177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.160 [2024-11-19 03:16:23.660193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.160 [2024-11-19 03:16:23.660427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.160 [2024-11-19 03:16:23.660630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.160 [2024-11-19 03:16:23.660650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.160 [2024-11-19 03:16:23.660662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.660696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.160 [2024-11-19 03:16:23.673081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.160 [2024-11-19 03:16:23.673490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.160 [2024-11-19 03:16:23.673519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.160 [2024-11-19 03:16:23.673535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.160 [2024-11-19 03:16:23.673787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.160 [2024-11-19 03:16:23.674021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.160 [2024-11-19 03:16:23.674042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.160 [2024-11-19 03:16:23.674055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.674067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.160 [2024-11-19 03:16:23.686269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.160 [2024-11-19 03:16:23.686648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.160 [2024-11-19 03:16:23.686677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.160 [2024-11-19 03:16:23.686716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.160 [2024-11-19 03:16:23.686955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.160 [2024-11-19 03:16:23.687184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.160 [2024-11-19 03:16:23.687203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.160 [2024-11-19 03:16:23.687216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.687228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.160 [2024-11-19 03:16:23.699470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.160 [2024-11-19 03:16:23.699880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.160 [2024-11-19 03:16:23.699908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.160 [2024-11-19 03:16:23.699925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.160 [2024-11-19 03:16:23.700176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.160 [2024-11-19 03:16:23.700378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.160 [2024-11-19 03:16:23.700398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.160 [2024-11-19 03:16:23.700410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.700422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.160 [2024-11-19 03:16:23.712600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.160 [2024-11-19 03:16:23.712968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.160 [2024-11-19 03:16:23.712996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.160 [2024-11-19 03:16:23.713012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.160 [2024-11-19 03:16:23.713246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.160 [2024-11-19 03:16:23.713450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.160 [2024-11-19 03:16:23.713469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.160 [2024-11-19 03:16:23.713481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.713493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.160 [2024-11-19 03:16:23.726150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.160 [2024-11-19 03:16:23.726499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.160 [2024-11-19 03:16:23.726529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.160 [2024-11-19 03:16:23.726561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.160 [2024-11-19 03:16:23.726783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.160 [2024-11-19 03:16:23.727016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.160 [2024-11-19 03:16:23.727052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.160 [2024-11-19 03:16:23.727066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.727083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.160 [2024-11-19 03:16:23.739345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.160 [2024-11-19 03:16:23.739728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.160 [2024-11-19 03:16:23.739758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.160 [2024-11-19 03:16:23.739775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.160 [2024-11-19 03:16:23.740006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.160 [2024-11-19 03:16:23.740223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.160 [2024-11-19 03:16:23.740244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.160 [2024-11-19 03:16:23.740258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.160 [2024-11-19 03:16:23.740271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.420 [2024-11-19 03:16:23.752453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.420 [2024-11-19 03:16:23.752847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.420 [2024-11-19 03:16:23.752877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.420 [2024-11-19 03:16:23.752894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.420 [2024-11-19 03:16:23.753145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.420 [2024-11-19 03:16:23.753334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.420 [2024-11-19 03:16:23.753355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.420 [2024-11-19 03:16:23.753368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.420 [2024-11-19 03:16:23.753381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.420 [2024-11-19 03:16:23.765640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.420 [2024-11-19 03:16:23.765991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.420 [2024-11-19 03:16:23.766019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.420 [2024-11-19 03:16:23.766035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.420 [2024-11-19 03:16:23.766270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.420 [2024-11-19 03:16:23.766472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.420 [2024-11-19 03:16:23.766492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.420 [2024-11-19 03:16:23.766505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.420 [2024-11-19 03:16:23.766517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.420 [2024-11-19 03:16:23.778696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.420 [2024-11-19 03:16:23.779057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.420 [2024-11-19 03:16:23.779101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.420 [2024-11-19 03:16:23.779117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.420 [2024-11-19 03:16:23.779363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.420 [2024-11-19 03:16:23.779567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.420 [2024-11-19 03:16:23.779586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.420 [2024-11-19 03:16:23.779599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.420 [2024-11-19 03:16:23.779612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.420 [2024-11-19 03:16:23.791752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.420 [2024-11-19 03:16:23.792159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.420 [2024-11-19 03:16:23.792187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.420 [2024-11-19 03:16:23.792203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.420 [2024-11-19 03:16:23.792438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.420 [2024-11-19 03:16:23.792641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.420 [2024-11-19 03:16:23.792661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.420 [2024-11-19 03:16:23.792673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.420 [2024-11-19 03:16:23.792685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.420 [2024-11-19 03:16:23.804778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.420 [2024-11-19 03:16:23.805132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.420 [2024-11-19 03:16:23.805161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.420 [2024-11-19 03:16:23.805177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.420 [2024-11-19 03:16:23.805411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.420 [2024-11-19 03:16:23.805614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.420 [2024-11-19 03:16:23.805634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.420 [2024-11-19 03:16:23.805647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.420 [2024-11-19 03:16:23.805659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.420 [2024-11-19 03:16:23.817871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.420 [2024-11-19 03:16:23.818214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.420 [2024-11-19 03:16:23.818242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.420 [2024-11-19 03:16:23.818262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.420 [2024-11-19 03:16:23.818497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.420 [2024-11-19 03:16:23.818725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.420 [2024-11-19 03:16:23.818746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.420 [2024-11-19 03:16:23.818759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.420 [2024-11-19 03:16:23.818771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.420 [2024-11-19 03:16:23.831105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.420 [2024-11-19 03:16:23.831477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.831504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.831520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.831748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.831956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.831975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.831987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.832000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.421 [2024-11-19 03:16:23.844096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.844438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.844466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.844483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.844731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.844938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.844957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.844970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.844981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.421 [2024-11-19 03:16:23.857116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.857524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.857567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.857583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.857830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.858057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.858082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.858095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.858107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.421 [2024-11-19 03:16:23.870560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.870930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.870961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.870978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.871228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.871432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.871453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.871466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.871479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.421 [2024-11-19 03:16:23.883582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.883922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.883951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.883967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.884189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.884392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.884412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.884425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.884437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.421 [2024-11-19 03:16:23.896670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.897078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.897106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.897122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.897353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.897556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.897576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.897589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.897606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.421 [2024-11-19 03:16:23.910017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.910347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.910376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.910392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.910615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.910867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.910891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.910907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.910921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.421 [2024-11-19 03:16:23.923535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.923856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.923887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.923904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.421 [2024-11-19 03:16:23.924140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.421 [2024-11-19 03:16:23.924349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.421 [2024-11-19 03:16:23.924370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.421 [2024-11-19 03:16:23.924382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.421 [2024-11-19 03:16:23.924394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.421 [2024-11-19 03:16:23.937160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.421 [2024-11-19 03:16:23.937474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.421 [2024-11-19 03:16:23.937511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.421 [2024-11-19 03:16:23.937545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:23.937781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:23.938018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:23.938053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:23.938066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:23.938078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.422 [2024-11-19 03:16:23.950402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.422 [2024-11-19 03:16:23.950755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.422 [2024-11-19 03:16:23.950786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.422 [2024-11-19 03:16:23.950802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:23.951044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:23.951233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:23.951253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:23.951265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:23.951277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.422 [2024-11-19 03:16:23.963678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.422 [2024-11-19 03:16:23.964109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.422 [2024-11-19 03:16:23.964137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.422 [2024-11-19 03:16:23.964153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:23.964400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:23.964587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:23.964607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:23.964620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:23.964632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.422 [2024-11-19 03:16:23.977208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.422 [2024-11-19 03:16:23.977560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.422 [2024-11-19 03:16:23.977589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.422 [2024-11-19 03:16:23.977606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:23.977869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:23.978091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:23.978110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:23.978124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:23.978136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.422 [2024-11-19 03:16:23.990454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.422 [2024-11-19 03:16:23.990801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.422 [2024-11-19 03:16:23.990829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.422 [2024-11-19 03:16:23.990845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:23.991087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:23.991290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:23.991309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:23.991321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:23.991333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.422 [2024-11-19 03:16:24.003574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.422 [2024-11-19 03:16:24.004015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.422 [2024-11-19 03:16:24.004043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.422 [2024-11-19 03:16:24.004059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:24.004295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:24.004483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:24.004502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:24.004514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:24.004526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.422 [2024-11-19 03:16:24.016642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.422 [2024-11-19 03:16:24.017158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.422 [2024-11-19 03:16:24.017206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.422 [2024-11-19 03:16:24.017222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:24.017453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:24.017656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:24.017676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:24.017693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:24.017721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.422 [2024-11-19 03:16:24.029779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.422 [2024-11-19 03:16:24.030187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.422 [2024-11-19 03:16:24.030215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.422 [2024-11-19 03:16:24.030231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.422 [2024-11-19 03:16:24.030466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.422 [2024-11-19 03:16:24.030669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.422 [2024-11-19 03:16:24.030714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.422 [2024-11-19 03:16:24.030731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.422 [2024-11-19 03:16:24.030743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.683 [2024-11-19 03:16:24.043234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.683 [2024-11-19 03:16:24.043632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.683 [2024-11-19 03:16:24.043659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.683 [2024-11-19 03:16:24.043675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.683 [2024-11-19 03:16:24.043911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.683 [2024-11-19 03:16:24.044159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.683 [2024-11-19 03:16:24.044179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.683 [2024-11-19 03:16:24.044191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.683 [2024-11-19 03:16:24.044203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.683 [2024-11-19 03:16:24.056523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.683 [2024-11-19 03:16:24.056840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.683 [2024-11-19 03:16:24.056870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.683 [2024-11-19 03:16:24.056887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.683 [2024-11-19 03:16:24.057138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.683 [2024-11-19 03:16:24.057342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.683 [2024-11-19 03:16:24.057361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.683 [2024-11-19 03:16:24.057373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.683 [2024-11-19 03:16:24.057384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.683 [2024-11-19 03:16:24.069789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.683 [2024-11-19 03:16:24.070207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.683 [2024-11-19 03:16:24.070234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.683 [2024-11-19 03:16:24.070249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.683 [2024-11-19 03:16:24.070484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.683 [2024-11-19 03:16:24.070713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.683 [2024-11-19 03:16:24.070747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.683 [2024-11-19 03:16:24.070763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.683 [2024-11-19 03:16:24.070781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.683 [2024-11-19 03:16:24.082862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.683 [2024-11-19 03:16:24.083231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.683 [2024-11-19 03:16:24.083269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.683 [2024-11-19 03:16:24.083303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.683 [2024-11-19 03:16:24.083532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.683 [2024-11-19 03:16:24.083746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.683 [2024-11-19 03:16:24.083767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.683 [2024-11-19 03:16:24.083780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.683 [2024-11-19 03:16:24.083792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.683 [2024-11-19 03:16:24.095963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.683 [2024-11-19 03:16:24.096366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.683 [2024-11-19 03:16:24.096394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.683 [2024-11-19 03:16:24.096409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.683 [2024-11-19 03:16:24.096644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.683 [2024-11-19 03:16:24.096878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.683 [2024-11-19 03:16:24.096898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.683 [2024-11-19 03:16:24.096911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.683 [2024-11-19 03:16:24.096923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.683 [2024-11-19 03:16:24.109000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.683 [2024-11-19 03:16:24.109343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.683 [2024-11-19 03:16:24.109371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.683 [2024-11-19 03:16:24.109387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.683 [2024-11-19 03:16:24.109623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.683 [2024-11-19 03:16:24.109876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.683 [2024-11-19 03:16:24.109897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.683 [2024-11-19 03:16:24.109911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.683 [2024-11-19 03:16:24.109923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.684 [2024-11-19 03:16:24.122160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.122502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.122534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.122551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.122797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.123012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.123032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.123045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.123070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.684 [2024-11-19 03:16:24.135208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.135561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.135612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.135647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.135930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.136142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.136161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.136174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.136186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.684 [2024-11-19 03:16:24.148265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.148681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.148736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.148753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.148989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.149191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.149209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.149222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.149234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.684 [2024-11-19 03:16:24.161261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.161612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.161660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.161676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.161937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.162143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.162162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.162175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.162187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.684 [2024-11-19 03:16:24.174241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.174655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.174711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.174728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.174956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.175159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.175178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.175190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.175202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.684 [2024-11-19 03:16:24.187275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.187628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.187676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.187701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.187972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.188176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.188195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.188207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.188219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.684 [2024-11-19 03:16:24.200407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.200749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.200776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.200791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.201020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.201223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.201247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.201260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.201272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.684 [2024-11-19 03:16:24.213404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.213754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.213782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.213798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.214034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.214238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.214257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.214269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.214281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.684 [2024-11-19 03:16:24.226465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.226884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.226912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.226927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.227157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.227361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.684 [2024-11-19 03:16:24.227380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.684 [2024-11-19 03:16:24.227393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.684 [2024-11-19 03:16:24.227404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.684 [2024-11-19 03:16:24.239566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.684 [2024-11-19 03:16:24.239997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.684 [2024-11-19 03:16:24.240041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.684 [2024-11-19 03:16:24.240057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.684 [2024-11-19 03:16:24.240289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.684 [2024-11-19 03:16:24.240493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.685 [2024-11-19 03:16:24.240512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.685 [2024-11-19 03:16:24.240525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.685 [2024-11-19 03:16:24.240536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.685 [2024-11-19 03:16:24.252711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.685 [2024-11-19 03:16:24.253124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.685 [2024-11-19 03:16:24.253152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.685 [2024-11-19 03:16:24.253168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.685 [2024-11-19 03:16:24.253388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.685 [2024-11-19 03:16:24.253590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.685 [2024-11-19 03:16:24.253609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.685 [2024-11-19 03:16:24.253622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.685 [2024-11-19 03:16:24.253634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.685 [2024-11-19 03:16:24.265897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.685 [2024-11-19 03:16:24.266257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.685 [2024-11-19 03:16:24.266284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.685 [2024-11-19 03:16:24.266300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.685 [2024-11-19 03:16:24.266536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.685 [2024-11-19 03:16:24.266784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.685 [2024-11-19 03:16:24.266805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.685 [2024-11-19 03:16:24.266819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.685 [2024-11-19 03:16:24.266832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.685 [2024-11-19 03:16:24.279000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.685 [2024-11-19 03:16:24.279406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.685 [2024-11-19 03:16:24.279433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.685 [2024-11-19 03:16:24.279449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.685 [2024-11-19 03:16:24.279685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.685 [2024-11-19 03:16:24.279906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.685 [2024-11-19 03:16:24.279926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.685 [2024-11-19 03:16:24.279939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.685 [2024-11-19 03:16:24.279951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.685 [2024-11-19 03:16:24.292081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.685 [2024-11-19 03:16:24.292423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.685 [2024-11-19 03:16:24.292457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.685 [2024-11-19 03:16:24.292474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.685 [2024-11-19 03:16:24.292708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.685 [2024-11-19 03:16:24.292922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.685 [2024-11-19 03:16:24.292943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.685 [2024-11-19 03:16:24.292956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.685 [2024-11-19 03:16:24.292967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.945 [2024-11-19 03:16:24.305181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.305612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.305641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.305657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.305895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.945 [2024-11-19 03:16:24.306119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.945 [2024-11-19 03:16:24.306139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.945 [2024-11-19 03:16:24.306151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.945 [2024-11-19 03:16:24.306178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.945 [2024-11-19 03:16:24.318301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.318644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.318671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.318686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.318959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.945 [2024-11-19 03:16:24.319178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.945 [2024-11-19 03:16:24.319197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.945 [2024-11-19 03:16:24.319210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.945 [2024-11-19 03:16:24.319221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.945 [2024-11-19 03:16:24.331410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.331832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.331860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.331876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.332115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.945 [2024-11-19 03:16:24.332320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.945 [2024-11-19 03:16:24.332339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.945 [2024-11-19 03:16:24.332351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.945 [2024-11-19 03:16:24.332363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.945 [2024-11-19 03:16:24.344503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.344858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.344924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.344940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.345185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.945 [2024-11-19 03:16:24.345389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.945 [2024-11-19 03:16:24.345407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.945 [2024-11-19 03:16:24.345420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.945 [2024-11-19 03:16:24.345431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.945 4317.00 IOPS, 16.86 MiB/s [2024-11-19T02:16:24.560Z] [2024-11-19 03:16:24.357481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.357877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.357929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.357945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.358188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.945 [2024-11-19 03:16:24.358375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.945 [2024-11-19 03:16:24.358394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.945 [2024-11-19 03:16:24.358407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.945 [2024-11-19 03:16:24.358418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.945 [2024-11-19 03:16:24.370573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.370976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.371005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.371022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.371271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.945 [2024-11-19 03:16:24.371495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.945 [2024-11-19 03:16:24.371521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.945 [2024-11-19 03:16:24.371535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.945 [2024-11-19 03:16:24.371547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
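The throughput figure interleaved above (4317.00 IOPS, 16.86 MiB/s) is consistent with a 4 KiB I/O size: 4317 × 4096 B ≈ 17.68 MB/s ≈ 16.86 MiB/s. The 4 KiB size is an inference from these two numbers only; this excerpt does not state the block size used by the workload.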
00:35:13.945 [2024-11-19 03:16:24.383841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.384223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.384249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.384265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.384494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.945 [2024-11-19 03:16:24.384722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.945 [2024-11-19 03:16:24.384758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.945 [2024-11-19 03:16:24.384771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.945 [2024-11-19 03:16:24.384783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.945 [2024-11-19 03:16:24.397011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.945 [2024-11-19 03:16:24.397417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.945 [2024-11-19 03:16:24.397445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.945 [2024-11-19 03:16:24.397461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.945 [2024-11-19 03:16:24.397703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.397917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.397937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.397949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.397962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.946 [2024-11-19 03:16:24.410114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.410518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.410546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.410562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.410825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.411055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.411074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.411087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.411099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.946 [2024-11-19 03:16:24.423241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.423656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.423684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.423723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.423967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.424210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.424229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.424241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.424252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.946 [2024-11-19 03:16:24.436364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.436669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.436702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.436718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.436935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.437138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.437157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.437170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.437181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.946 [2024-11-19 03:16:24.449382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.449695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.449723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.449738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.449955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.450157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.450176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.450188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.450200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.946 [2024-11-19 03:16:24.462430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.462838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.462872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.462889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.463130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.463332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.463352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.463364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.463375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.946 [2024-11-19 03:16:24.475400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.475740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.475767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.475783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.475997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.476201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.476220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.476232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.476243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.946 [2024-11-19 03:16:24.488509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.488919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.488947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.488963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.489197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.489385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.489404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.489417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.489428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.946 [2024-11-19 03:16:24.501504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.501915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.946 [2024-11-19 03:16:24.501943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.946 [2024-11-19 03:16:24.501958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.946 [2024-11-19 03:16:24.502199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.946 [2024-11-19 03:16:24.502402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.946 [2024-11-19 03:16:24.502421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.946 [2024-11-19 03:16:24.502434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.946 [2024-11-19 03:16:24.502445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.946 [2024-11-19 03:16:24.514588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.946 [2024-11-19 03:16:24.514887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.947 [2024-11-19 03:16:24.514929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.947 [2024-11-19 03:16:24.514946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.947 [2024-11-19 03:16:24.515161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.947 [2024-11-19 03:16:24.515365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.947 [2024-11-19 03:16:24.515384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.947 [2024-11-19 03:16:24.515396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.947 [2024-11-19 03:16:24.515408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.947 [2024-11-19 03:16:24.527683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.947 [2024-11-19 03:16:24.527994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.947 [2024-11-19 03:16:24.528021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.947 [2024-11-19 03:16:24.528036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.947 [2024-11-19 03:16:24.528247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.947 [2024-11-19 03:16:24.528451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.947 [2024-11-19 03:16:24.528470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.947 [2024-11-19 03:16:24.528483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.947 [2024-11-19 03:16:24.528494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.947 [2024-11-19 03:16:24.540875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.947 [2024-11-19 03:16:24.541238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.947 [2024-11-19 03:16:24.541266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.947 [2024-11-19 03:16:24.541282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.947 [2024-11-19 03:16:24.541517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.947 [2024-11-19 03:16:24.541746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.947 [2024-11-19 03:16:24.541780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.947 [2024-11-19 03:16:24.541799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.947 [2024-11-19 03:16:24.541813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.947 [2024-11-19 03:16:24.553975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.947 [2024-11-19 03:16:24.554333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.947 [2024-11-19 03:16:24.554360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:13.947 [2024-11-19 03:16:24.554376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:13.947 [2024-11-19 03:16:24.554611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:13.947 [2024-11-19 03:16:24.554851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.947 [2024-11-19 03:16:24.554873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.947 [2024-11-19 03:16:24.554887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.947 [2024-11-19 03:16:24.554900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.207 [2024-11-19 03:16:24.567209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.207 [2024-11-19 03:16:24.567596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.207 [2024-11-19 03:16:24.567623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.207 [2024-11-19 03:16:24.567639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.207 [2024-11-19 03:16:24.567917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.207 [2024-11-19 03:16:24.568160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.207 [2024-11-19 03:16:24.568180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.207 [2024-11-19 03:16:24.568193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.207 [2024-11-19 03:16:24.568205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.207 [2024-11-19 03:16:24.580399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.207 [2024-11-19 03:16:24.580741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.207 [2024-11-19 03:16:24.580768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.207 [2024-11-19 03:16:24.580784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.207 [2024-11-19 03:16:24.581012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.207 [2024-11-19 03:16:24.581216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.207 [2024-11-19 03:16:24.581235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.207 [2024-11-19 03:16:24.581247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.207 [2024-11-19 03:16:24.581259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.207 [2024-11-19 03:16:24.593492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.207 [2024-11-19 03:16:24.593839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.207 [2024-11-19 03:16:24.593866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.207 [2024-11-19 03:16:24.593881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.207 [2024-11-19 03:16:24.594110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.207 [2024-11-19 03:16:24.594314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.207 [2024-11-19 03:16:24.594333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.207 [2024-11-19 03:16:24.594345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.207 [2024-11-19 03:16:24.594356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.207 [2024-11-19 03:16:24.606540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.207 [2024-11-19 03:16:24.606888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.207 [2024-11-19 03:16:24.606916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.207 [2024-11-19 03:16:24.606931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.207 [2024-11-19 03:16:24.607167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.207 [2024-11-19 03:16:24.607355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.207 [2024-11-19 03:16:24.607374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.607386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.607398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.208 [2024-11-19 03:16:24.619586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.619950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.619977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.619993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.620234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.620422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.620441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.620454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.620466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.208 [2024-11-19 03:16:24.632761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.633081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.633109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.633130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.633346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.633567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.633587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.633599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.633612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.208 [2024-11-19 03:16:24.645778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.646121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.646148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.646163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.646379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.646584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.646603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.646616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.646627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.208 [2024-11-19 03:16:24.659145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.659550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.659578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.659594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.659831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.660056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.660076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.660088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.660100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.208 [2024-11-19 03:16:24.672427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.672841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.672890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.672907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.673142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.673335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.673354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.673366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.673378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.208 [2024-11-19 03:16:24.685712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.686196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.686249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.686264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.686507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.686719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.686754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.686768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.686780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.208 [2024-11-19 03:16:24.698897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.699282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.699309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.699324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.699539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.699786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.699806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.699820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.699832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.208 [2024-11-19 03:16:24.711975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.712378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.712406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.712421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.712656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.208 [2024-11-19 03:16:24.712888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.208 [2024-11-19 03:16:24.712909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.208 [2024-11-19 03:16:24.712927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.208 [2024-11-19 03:16:24.712940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.208 [2024-11-19 03:16:24.725066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.208 [2024-11-19 03:16:24.725469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.208 [2024-11-19 03:16:24.725495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.208 [2024-11-19 03:16:24.725511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.208 [2024-11-19 03:16:24.725747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.725942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.725961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.725974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.725999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.209 [2024-11-19 03:16:24.738025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.209 [2024-11-19 03:16:24.738378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.209 [2024-11-19 03:16:24.738405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.209 [2024-11-19 03:16:24.738421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.209 [2024-11-19 03:16:24.738656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.738896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.738918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.738932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.738944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.209 [2024-11-19 03:16:24.751112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.209 [2024-11-19 03:16:24.751483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.209 [2024-11-19 03:16:24.751511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.209 [2024-11-19 03:16:24.751526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.209 [2024-11-19 03:16:24.751750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.751950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.751970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.751998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.752010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.209 [2024-11-19 03:16:24.764340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.209 [2024-11-19 03:16:24.764650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.209 [2024-11-19 03:16:24.764679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.209 [2024-11-19 03:16:24.764718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.209 [2024-11-19 03:16:24.764954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.765158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.765177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.765190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.765201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.209 [2024-11-19 03:16:24.777518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.209 [2024-11-19 03:16:24.777958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.209 [2024-11-19 03:16:24.778018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.209 [2024-11-19 03:16:24.778035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.209 [2024-11-19 03:16:24.778276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.778463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.778483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.778495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.778507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.209 [2024-11-19 03:16:24.790783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.209 [2024-11-19 03:16:24.791169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.209 [2024-11-19 03:16:24.791207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.209 [2024-11-19 03:16:24.791223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.209 [2024-11-19 03:16:24.791456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.791659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.791687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.791723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.791736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.209 [2024-11-19 03:16:24.803943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.209 [2024-11-19 03:16:24.804364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.209 [2024-11-19 03:16:24.804391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.209 [2024-11-19 03:16:24.804422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.209 [2024-11-19 03:16:24.804655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.804894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.804915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.804930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.804942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.209 [2024-11-19 03:16:24.817035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.209 [2024-11-19 03:16:24.817418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.209 [2024-11-19 03:16:24.817451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.209 [2024-11-19 03:16:24.817467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.209 [2024-11-19 03:16:24.817682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.209 [2024-11-19 03:16:24.817919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.209 [2024-11-19 03:16:24.817939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.209 [2024-11-19 03:16:24.817953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.209 [2024-11-19 03:16:24.817965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
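(Context for the repeating block above, not part of the captured output: errno 111 is ECONNREFUSED, so every reset attempt fails the same way while nothing is accepting TCP connections on 10.0.0.2:4420 -- connect() is refused, flushing the dead qpair then reports a bad file descriptor, and bdev_nvme marks that reset attempt failed before scheduling the next one. This is consistent with the target application having been taken down, as the "Killed" message that follows shows. A quick way to confirm from the test host, assuming the cvl_0_0_ns_spdk namespace named later in this log and that ss/nc are available:

    ip netns exec cvl_0_0_ns_spdk ss -ltn    # no listener on :4420 while nvmf_tgt is down
    nc -zv 10.0.0.2 4420                     # "Connection refused" is the same errno 111 seen above)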
00:35:14.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 399851 Killed "${NVMF_APP[@]}" "$@" 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.470 [2024-11-19 03:16:24.830467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.470 [2024-11-19 03:16:24.830806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-19 03:16:24.830836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.470 [2024-11-19 03:16:24.830853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.470 [2024-11-19 03:16:24.831096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.470 [2024-11-19 03:16:24.831339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.470 [2024-11-19 03:16:24.831359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.470 [2024-11-19 03:16:24.831373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.470 [2024-11-19 03:16:24.831385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=400812 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 400812 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 400812 ']' 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.470 03:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.470 [2024-11-19 03:16:24.843868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.470 [2024-11-19 03:16:24.844254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-19 03:16:24.844286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.470 [2024-11-19 03:16:24.844302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.470 [2024-11-19 03:16:24.844544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.470 [2024-11-19 03:16:24.844775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.470 [2024-11-19 03:16:24.844796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.470 [2024-11-19 03:16:24.844810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.470 [2024-11-19 03:16:24.844823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.470 [2024-11-19 03:16:24.857345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.470 [2024-11-19 03:16:24.857663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-19 03:16:24.857727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.470 [2024-11-19 03:16:24.857744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.470 [2024-11-19 03:16:24.857958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.470 [2024-11-19 03:16:24.858181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.470 [2024-11-19 03:16:24.858201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.470 [2024-11-19 03:16:24.858214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.470 [2024-11-19 03:16:24.858226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.470 [2024-11-19 03:16:24.870656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.470 [2024-11-19 03:16:24.871065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-19 03:16:24.871094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.470 [2024-11-19 03:16:24.871110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.470 [2024-11-19 03:16:24.871362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.470 [2024-11-19 03:16:24.871562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.470 [2024-11-19 03:16:24.871581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.470 [2024-11-19 03:16:24.871595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.470 [2024-11-19 03:16:24.871607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.470 [2024-11-19 03:16:24.878946] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:14.470 [2024-11-19 03:16:24.879005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.470 [2024-11-19 03:16:24.883957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.470 [2024-11-19 03:16:24.884304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-19 03:16:24.884331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.470 [2024-11-19 03:16:24.884347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.470 [2024-11-19 03:16:24.884576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.470 [2024-11-19 03:16:24.884818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.884839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.884852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.884864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
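(The reconnect storm resolves only once the target side is rebuilt: bdevperf.sh killed the old nvmf_tgt (pid 399851), and tgt_init / nvmfappstart -m 0xE relaunch build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 400812, with waitforlisten blocking until /var/tmp/spdk.sock answers. After that, the transport, subsystem and TCP listener have to be re-created before the host's resets can succeed. The outline below is illustrative only -- the real steps live in the repo's test helpers, the Malloc0 bdev name is assumed, and the namespace/RPC-socket wrapping shown above is omitted:

    build/bin/nvmf_tgt -m 0xE &                                            # relaunch the target
    scripts/rpc.py nvmf_create_transport -t TCP                            # TCP transport
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a     # allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420)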
00:35:14.471 [2024-11-19 03:16:24.897412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.897860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.897889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.897906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.898148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.898346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.898364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.898377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.898388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.471 [2024-11-19 03:16:24.910620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.911002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.911030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.911046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.911293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.911501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.911520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.911532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.911544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.471 [2024-11-19 03:16:24.923991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.924317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.924345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.924361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.924589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.924853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.924875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.924888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.924901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.471 [2024-11-19 03:16:24.937208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.937576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.937618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.937633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.937895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.938112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.938131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.938144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.938155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.471 [2024-11-19 03:16:24.950492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.950906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.950935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.950952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.951180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.951393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.951420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.951433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.951445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.471 [2024-11-19 03:16:24.953412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:14.471 [2024-11-19 03:16:24.963776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.964281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.964319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.964339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.964590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.964822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.964843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.964859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.964875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.471 [2024-11-19 03:16:24.977164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.977586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.977618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.977636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.977876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.978099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.978119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.978133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.978147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.471 [2024-11-19 03:16:24.990410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.471 [2024-11-19 03:16:24.990763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-19 03:16:24.990792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.471 [2024-11-19 03:16:24.990808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.471 [2024-11-19 03:16:24.991038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.471 [2024-11-19 03:16:24.991253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.471 [2024-11-19 03:16:24.991272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.471 [2024-11-19 03:16:24.991296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.471 [2024-11-19 03:16:24.991308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.471 [2024-11-19 03:16:24.998547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.472 [2024-11-19 03:16:24.998577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:14.472 [2024-11-19 03:16:24.998605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.472 [2024-11-19 03:16:24.998616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.472 [2024-11-19 03:16:24.998625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
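(Because the target was started with -e 0xFFFF, the app_setup_trace notices above apply: all tracepoint groups are enabled and the trace shared-memory file for this instance is /dev/shm/nvmf_trace.0. Following the log's own suggestion, a snapshot can be taken while the run is live, or the shm file copied for offline analysis -- the -f form assumes the standalone trace tool accepts a copied trace file, per SPDK's tracing docs:

    spdk_trace -s nvmf -i 0                                   # live snapshot, as printed above
    cp /dev/shm/nvmf_trace.0 /tmp/ && spdk_trace -f /tmp/nvmf_trace.0)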
00:35:14.472 [2024-11-19 03:16:24.999956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:14.472 [2024-11-19 03:16:25.000014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:14.472 [2024-11-19 03:16:25.000018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.472 [2024-11-19 03:16:25.003969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.472 [2024-11-19 03:16:25.004386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-19 03:16:25.004417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.472 [2024-11-19 03:16:25.004436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.472 [2024-11-19 03:16:25.004669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.472 [2024-11-19 03:16:25.004914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.472 [2024-11-19 03:16:25.004937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.472 [2024-11-19 03:16:25.004952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.472 [2024-11-19 03:16:25.004968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.472 [2024-11-19 03:16:25.017500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.472 [2024-11-19 03:16:25.018012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-19 03:16:25.018049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.472 [2024-11-19 03:16:25.018070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.472 [2024-11-19 03:16:25.018308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.472 [2024-11-19 03:16:25.018526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.472 [2024-11-19 03:16:25.018547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.472 [2024-11-19 03:16:25.018564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.472 [2024-11-19 03:16:25.018579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
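(The three reactor lines above match the core mask passed to nvmfappstart: 0xE = 0b1110 = 8 + 4 + 2, i.e. bits 1, 2 and 3 are set and bit 0 is clear, so the target runs exactly three reactor threads on cores 1-3 and leaves core 0 free -- which is also why "Total cores available: 3" was reported earlier.)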
00:35:14.472 [2024-11-19 03:16:25.031145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.472 [2024-11-19 03:16:25.031662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-19 03:16:25.031707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.472 [2024-11-19 03:16:25.031729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.472 [2024-11-19 03:16:25.031963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.472 [2024-11-19 03:16:25.032196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.472 [2024-11-19 03:16:25.032217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.472 [2024-11-19 03:16:25.032234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.472 [2024-11-19 03:16:25.032249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.472 [2024-11-19 03:16:25.044814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.472 [2024-11-19 03:16:25.045348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-19 03:16:25.045383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.472 [2024-11-19 03:16:25.045402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.472 [2024-11-19 03:16:25.045642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.472 [2024-11-19 03:16:25.045900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.472 [2024-11-19 03:16:25.045924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.472 [2024-11-19 03:16:25.045941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.472 [2024-11-19 03:16:25.045957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.472 [2024-11-19 03:16:25.058420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.472 [2024-11-19 03:16:25.058877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-19 03:16:25.058909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.472 [2024-11-19 03:16:25.058929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.472 [2024-11-19 03:16:25.059174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.472 [2024-11-19 03:16:25.059389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.472 [2024-11-19 03:16:25.059411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.472 [2024-11-19 03:16:25.059427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.472 [2024-11-19 03:16:25.059458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.472 [2024-11-19 03:16:25.072358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.472 [2024-11-19 03:16:25.072934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-19 03:16:25.072984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.472 [2024-11-19 03:16:25.073004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.472 [2024-11-19 03:16:25.073243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.472 [2024-11-19 03:16:25.073460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.472 [2024-11-19 03:16:25.073491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.472 [2024-11-19 03:16:25.073509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.472 [2024-11-19 03:16:25.073524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.472 [2024-11-19 03:16:25.086032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.472 [2024-11-19 03:16:25.086463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-19 03:16:25.086495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.472 [2024-11-19 03:16:25.086512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.472 [2024-11-19 03:16:25.086740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.733 [2024-11-19 03:16:25.086961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.733 [2024-11-19 03:16:25.086991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.733 [2024-11-19 03:16:25.087007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.733 [2024-11-19 03:16:25.087022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.733 [2024-11-19 03:16:25.099610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.733 [2024-11-19 03:16:25.099963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.733 [2024-11-19 03:16:25.099998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.733 [2024-11-19 03:16:25.100014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.733 [2024-11-19 03:16:25.100239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.733 [2024-11-19 03:16:25.100458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.733 [2024-11-19 03:16:25.100479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.733 [2024-11-19 03:16:25.100493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.733 [2024-11-19 03:16:25.100505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.733 [2024-11-19 03:16:25.113076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.733 [2024-11-19 03:16:25.113433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.733 [2024-11-19 03:16:25.113462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.734 [2024-11-19 03:16:25.113478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.734 [2024-11-19 03:16:25.113709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.734 [2024-11-19 03:16:25.113929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.734 [2024-11-19 03:16:25.113950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:14.734 [2024-11-19 03:16:25.113971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.734 [2024-11-19 03:16:25.113985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.734 [2024-11-19 03:16:25.126561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.734 [2024-11-19 03:16:25.126989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.734 [2024-11-19 03:16:25.127018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.734 [2024-11-19 03:16:25.127045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.734 [2024-11-19 03:16:25.127283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.734 [2024-11-19 03:16:25.127494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.734 [2024-11-19 03:16:25.127515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.734 [2024-11-19 03:16:25.127528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.734 [2024-11-19 03:16:25.127541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.734 [2024-11-19 03:16:25.136887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.734 [2024-11-19 03:16:25.140087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.734 [2024-11-19 03:16:25.140429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.734 [2024-11-19 03:16:25.140457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.734 [2024-11-19 03:16:25.140473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.734 [2024-11-19 03:16:25.140686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.734 [2024-11-19 03:16:25.140914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.734 [2024-11-19 03:16:25.140935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.734 [2024-11-19 03:16:25.140950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.734 [2024-11-19 03:16:25.140962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.734 [2024-11-19 03:16:25.153632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.734 [2024-11-19 03:16:25.154123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.734 [2024-11-19 03:16:25.154158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.734 [2024-11-19 03:16:25.154178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.734 [2024-11-19 03:16:25.154413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.734 [2024-11-19 03:16:25.154628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.734 [2024-11-19 03:16:25.154649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.734 [2024-11-19 03:16:25.154665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.734 [2024-11-19 03:16:25.154717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.734 [2024-11-19 03:16:25.167189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.734 [2024-11-19 03:16:25.167533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.734 [2024-11-19 03:16:25.167561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.734 [2024-11-19 03:16:25.167577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.734 [2024-11-19 03:16:25.167799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.734 [2024-11-19 03:16:25.168047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.734 [2024-11-19 03:16:25.168066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.734 [2024-11-19 03:16:25.168079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.734 [2024-11-19 03:16:25.168091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
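The rpc_cmd xtrace lines in this stretch (nvmf_create_transport above, bdev_malloc_create here, and the nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener calls that follow below) are the bdevperf harness driving SPDK's JSON-RPC interface; a minimal stand-alone sketch of the same target bring-up, assuming a running nvmf_tgt and the repo's scripts/rpc.py against the default RPC socket (the wrapper and paths are assumptions, the arguments are exactly the ones shown in this log):

  # TCP transport with the same options the harness passes (-o -u 8192)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1: allow any host (-a), fixed serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listen on the target-namespace address used throughout this log
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420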
00:35:14.734 [2024-11-19 03:16:25.180733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.734 [2024-11-19 03:16:25.181206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.734 [2024-11-19 03:16:25.181238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.734 [2024-11-19 03:16:25.181257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.734 [2024-11-19 03:16:25.181491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.734 [2024-11-19 03:16:25.181732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.734 [2024-11-19 03:16:25.181754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.734 [2024-11-19 03:16:25.181769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.734 [2024-11-19 03:16:25.181784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.734 Malloc0 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.734 [2024-11-19 03:16:25.194287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.734 [2024-11-19 03:16:25.194635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.734 [2024-11-19 03:16:25.194662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97acf0 with addr=10.0.0.2, port=4420 00:35:14.734 [2024-11-19 03:16:25.194679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97acf0 is same with the state(6) to be set 00:35:14.734 [2024-11-19 03:16:25.194900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97acf0 (9): Bad file descriptor 00:35:14.734 [2024-11-19 03:16:25.195131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.734 [2024-11-19 03:16:25.195151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.734 [2024-11-19 03:16:25.195165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:14.734 [2024-11-19 03:16:25.195177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:14.734 [2024-11-19 03:16:25.203888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.734 [2024-11-19 03:16:25.207907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.734 03:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 400135 00:35:14.734 [2024-11-19 03:16:25.325171] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:35:16.114 3615.33 IOPS, 14.12 MiB/s [2024-11-19T02:16:27.666Z] 4325.29 IOPS, 16.90 MiB/s [2024-11-19T02:16:28.603Z] 4847.50 IOPS, 18.94 MiB/s [2024-11-19T02:16:29.538Z] 5263.33 IOPS, 20.56 MiB/s [2024-11-19T02:16:30.477Z] 5588.10 IOPS, 21.83 MiB/s [2024-11-19T02:16:31.411Z] 5848.27 IOPS, 22.84 MiB/s [2024-11-19T02:16:32.791Z] 6068.92 IOPS, 23.71 MiB/s [2024-11-19T02:16:33.726Z] 6256.92 IOPS, 24.44 MiB/s [2024-11-19T02:16:34.664Z] 6428.21 IOPS, 25.11 MiB/s [2024-11-19T02:16:34.664Z] 6565.93 IOPS, 25.65 MiB/s 00:35:24.049 Latency(us) 00:35:24.049 [2024-11-19T02:16:34.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:24.049 Verification LBA range: start 0x0 length 0x4000 00:35:24.049 Nvme1n1 : 15.01 6568.73 25.66 10345.43 0.00 7544.93 831.34 17670.45 00:35:24.049 [2024-11-19T02:16:34.664Z] =================================================================================================================== 00:35:24.049 [2024-11-19T02:16:34.664Z] Total : 6568.73 25.66 10345.43 0.00 7544.93 831.34 17670.45 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:24.049 rmmod nvme_tcp 00:35:24.049 rmmod nvme_fabrics 00:35:24.049 rmmod nvme_keyring 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 400812 ']' 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 400812 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 400812 ']' 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 400812 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:24.049 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 400812 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 400812' 00:35:24.308 killing process with pid 400812 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 400812 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 400812 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.308 03:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.841 03:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.841 00:35:26.841 real 0m22.600s 00:35:26.841 user 1m0.600s 00:35:26.841 sys 0m4.118s 00:35:26.841 03:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:26.841 03:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.841 ************************************ 00:35:26.841 END TEST nvmf_bdevperf 00:35:26.841 ************************************ 00:35:26.842 03:16:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:26.842 03:16:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:26.842 03:16:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.842 03:16:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.842 ************************************ 00:35:26.842 START TEST nvmf_target_disconnect 00:35:26.842 ************************************ 00:35:26.842 03:16:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:26.842 * Looking for test storage... 00:35:26.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.842 --rc genhtml_branch_coverage=1 00:35:26.842 --rc genhtml_function_coverage=1 00:35:26.842 --rc genhtml_legend=1 00:35:26.842 --rc geninfo_all_blocks=1 00:35:26.842 --rc geninfo_unexecuted_blocks=1 00:35:26.842 00:35:26.842 ' 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.842 --rc genhtml_branch_coverage=1 00:35:26.842 --rc genhtml_function_coverage=1 00:35:26.842 --rc genhtml_legend=1 00:35:26.842 --rc geninfo_all_blocks=1 00:35:26.842 --rc geninfo_unexecuted_blocks=1 00:35:26.842 00:35:26.842 ' 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.842 --rc genhtml_branch_coverage=1 00:35:26.842 --rc genhtml_function_coverage=1 00:35:26.842 --rc genhtml_legend=1 00:35:26.842 --rc geninfo_all_blocks=1 00:35:26.842 --rc geninfo_unexecuted_blocks=1 00:35:26.842 00:35:26.842 ' 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.842 --rc genhtml_branch_coverage=1 00:35:26.842 --rc genhtml_function_coverage=1 00:35:26.842 --rc genhtml_legend=1 00:35:26.842 --rc geninfo_all_blocks=1 00:35:26.842 --rc geninfo_unexecuted_blocks=1 00:35:26.842 00:35:26.842 ' 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.842 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:26.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.843 03:16:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:28.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:28.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:28.748 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:28.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:28.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:28.749 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:35:29.008 00:35:29.008 --- 10.0.0.2 ping statistics --- 00:35:29.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.008 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:29.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:35:29.008 00:35:29.008 --- 10.0.0.1 ping statistics --- 00:35:29.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.008 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:29.008 ************************************ 00:35:29.008 START TEST nvmf_target_disconnect_tc1 00:35:29.008 ************************************ 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:29.008 03:16:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.008 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:29.009 [2024-11-19 03:16:39.535486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.009 [2024-11-19 03:16:39.535575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x699a90 with addr=10.0.0.2, port=4420 00:35:29.009 [2024-11-19 03:16:39.535615] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:29.009 [2024-11-19 03:16:39.535637] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:29.009 [2024-11-19 03:16:39.535651] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:29.009 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:29.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:29.009 Initializing NVMe Controllers 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:29.009 00:35:29.009 real 0m0.100s 00:35:29.009 user 0m0.047s 00:35:29.009 sys 0m0.052s 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:29.009 ************************************ 00:35:29.009 END TEST nvmf_target_disconnect_tc1 00:35:29.009 ************************************ 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
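The trace up to this point builds the loopback test bed and runs the first, expected-to-fail case: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator side keeps cvl_0_1 with 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420, reachability is confirmed with one ping in each direction, and nvme-tcp is loaded. tc1 then points the reconnect example at 10.0.0.2:4420 while nothing is listening yet, so connect() fails with errno 111 (ECONNREFUSED) and the NOT wrapper treats that failure (es=1) as the expected outcome. A stand-alone sketch of the same topology, with the nvmf/common.sh helper layering and the SPDK_NVMF iptables comment tag omitted, would be:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp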
00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:29.009 ************************************ 00:35:29.009 START TEST nvmf_target_disconnect_tc2 00:35:29.009 ************************************ 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=403963 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 403963 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403963 ']' 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.009 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.267 [2024-11-19 03:16:39.654172] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:29.267 [2024-11-19 03:16:39.654246] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.267 [2024-11-19 03:16:39.729137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:29.267 [2024-11-19 03:16:39.777345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.267 [2024-11-19 03:16:39.777402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:29.267 [2024-11-19 03:16:39.777426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.267 [2024-11-19 03:16:39.777437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.267 [2024-11-19 03:16:39.777447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:29.267 [2024-11-19 03:16:39.778989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:29.267 [2024-11-19 03:16:39.779032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:29.267 [2024-11-19 03:16:39.779107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:29.267 [2024-11-19 03:16:39.779110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.526 Malloc0 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.526 [2024-11-19 03:16:39.967801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.526 03:16:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.526 [2024-11-19 03:16:39.996088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.526 03:16:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:29.526 03:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.526 03:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=404113 00:35:29.526 03:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:29.526 03:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:31.436 03:16:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 403963 00:35:31.436 03:16:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error 
(sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 [2024-11-19 03:16:42.021235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read 
completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 [2024-11-19 03:16:42.021543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O 
failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Read completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 Write completed with error (sct=0, sc=8) 00:35:31.436 starting I/O failed 00:35:31.436 [2024-11-19 03:16:42.021891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:31.436 [2024-11-19 03:16:42.022146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.436 [2024-11-19 03:16:42.022186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.436 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.022289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.022317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.022476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.022502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 
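For tc2 the script first brings up a real target inside the namespace: nvmf_tgt is started with core mask 0xF0 (pid 403963 above), a 64 MiB malloc bdev with 512-byte blocks is created, the TCP transport is enabled, subsystem nqn.2016-06.io.spdk:cnode1 gets the namespace and a listener on 10.0.0.2:4420, and the reconnect example is launched against it (queue depth 32, 4 KiB random read/write, 10 s runtime, core mask 0xF). The kill -9 of pid 403963 is what produces the burst of failed completions shown here. Condensed into plain commands, with scripts/rpc.py standing in for the rpc_cmd wrapper used by the test and the relative paths being assumptions, the bring-up looks roughly like:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2 && kill -9 "$nvmfpid"      # the disconnect under test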
00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Read completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 Write completed with error (sct=0, sc=8) 00:35:31.437 starting I/O failed 00:35:31.437 [2024-11-19 03:16:42.022826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:31.437 [2024-11-19 03:16:42.022921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.022969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 
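Every one of the 32 outstanding commands on each I/O qpair is completed in error (sct=0, sc=8) once the controller goes away, each qpair then logs the CQ transport error -6 (No such device or address), and the subsequent reconnect attempts die in connect() with errno 111. When triaging a long run like this, a few one-liners summarize the pattern quickly; target_disconnect.log is a placeholder name for a saved copy of this output:

    grep -c 'Read completed with error'  target_disconnect.log   # aborted reads
    grep -c 'Write completed with error' target_disconnect.log   # aborted writes
    grep 'CQ transport error -6' target_disconnect.log           # one teardown per qpair id
    grep -o 'tqpair=0x[0-9a-f]*' target_disconnect.log | sort | uniq -c   # objects still retrying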
00:35:31.437 [2024-11-19 03:16:42.023110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.023143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.023289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.023316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.023494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.023546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.023660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.023696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.023809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.023835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.023917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.023943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.024096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.024122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.024229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.024254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.024361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.024387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.024465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.024490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 
00:35:31.437 [2024-11-19 03:16:42.024587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.024613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.024741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.024789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.024898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.024940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.025116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.025156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.025317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.025346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.025455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.025482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.025601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.025628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.025746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.025775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.025860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.025885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.025974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.026001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 
00:35:31.437 [2024-11-19 03:16:42.026102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.026128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.026240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.026266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.437 [2024-11-19 03:16:42.026343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.437 [2024-11-19 03:16:42.026368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.437 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.026443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.026470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.026587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.026619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.026735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.026767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.026867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.026908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.027032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.027061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.027230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.027257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.027377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.027404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 
00:35:31.438 [2024-11-19 03:16:42.027522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.027548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.027659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.027696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.027794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.027819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.027908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.027935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.028065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.028091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.028219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.028248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.028335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.028363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.028597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.028665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.028778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.028805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.028900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.028926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 
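The remainder of this stretch is the host looping on reconnection: each attempt opens a TCP socket to 10.0.0.2:4420, is refused with errno 111 because nothing is listening in the namespace any more, and the qpair is given up. A hypothetical watch loop for the listener coming back, using bash's built-in /dev/tcp pseudo-device rather than anything from the test scripts, could look like:

    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo '10.0.0.2:4420 still refusing connections'
        sleep 1
    done
    echo 'listener is reachable again'

With the target killed and not restarted, such a loop would keep spinning, just as the reconnect attempts above do.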
00:35:31.438 [2024-11-19 03:16:42.029016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.029052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.029173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.029198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.029346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.029375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.029500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.029526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.029669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.029714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.029802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.029829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.029920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.029947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.030058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.030085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.030164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.030205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.030315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.030355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 
00:35:31.438 [2024-11-19 03:16:42.030492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.030532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.030635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.030670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.030771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.030799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.030926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.030953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.031054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.031081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.031266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.031294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.031414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.031445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.031569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.031598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.031683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.031716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.438 [2024-11-19 03:16:42.031854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.031879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 
00:35:31.438 [2024-11-19 03:16:42.032091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-11-19 03:16:42.032151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.438 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.032336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.032363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.032479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.032507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.032629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.032658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.032783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.032823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.032953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.032982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.033093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.033120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.033239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.033265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.033383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.033410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.033498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.033523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 
00:35:31.439 [2024-11-19 03:16:42.033617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.033645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.033751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.033779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.033922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.033949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.034079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.034107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.034194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.034222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.034338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.034364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.034505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.034532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.034662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.034704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.034824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.034856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.034995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.035021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 
00:35:31.439 [2024-11-19 03:16:42.035163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.035190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.035459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.035497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.035662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.035703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.035903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.035931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.036145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.036171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.036326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.036352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.036562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.036589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.036743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.036771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.036861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.036889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.036983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.037014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 
00:35:31.439 [2024-11-19 03:16:42.037101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.037128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.037238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.037265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.037388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.037414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.037503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.037531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.037657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.037711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.037843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.037882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.038021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.038049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.038170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-11-19 03:16:42.038197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.439 qpair failed and we were unable to recover it. 00:35:31.439 [2024-11-19 03:16:42.038315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.038342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.038424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.038452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 
00:35:31.440 [2024-11-19 03:16:42.038559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.038586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.038710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.038741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.038895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.038923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.039063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.039090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.039207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.039233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.039336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.039363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.039500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.039527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.039611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.039637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.039778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.039805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.039921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.039948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 
00:35:31.440 [2024-11-19 03:16:42.040102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.040128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.040246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.040272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.040386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.040411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.040532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.040564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.040687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.040733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.040867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.040896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.041014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.041162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.041301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.041425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 
00:35:31.440 [2024-11-19 03:16:42.041565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.041706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.041819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.041960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.041993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.042080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.042107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.042261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.042287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.042399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.042424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.042570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.042596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.042795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.042825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.440 qpair failed and we were unable to recover it. 00:35:31.440 [2024-11-19 03:16:42.042940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-11-19 03:16:42.042967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 
00:35:31.441 [2024-11-19 03:16:42.043089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.043117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.043239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.043266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.043463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.043490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.043614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.043647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.043756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.043784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.043923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.043949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.044039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.044065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.044203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.044229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.044410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.044479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.044595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.044623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 
00:35:31.441 [2024-11-19 03:16:42.044728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.044756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.044882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.044923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.045051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.045079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.045259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.045313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.045473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.045532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.045698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.045748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.045870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.045898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.046024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.046055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.046165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.046191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.046385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.046461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 
00:35:31.441 [2024-11-19 03:16:42.046587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.046616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.046769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.046796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.046937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.046963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.047157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.047185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.047336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.047363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.047481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.047507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.047627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.047654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.047799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.047840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.047998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.048028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.048148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.048175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 
00:35:31.441 [2024-11-19 03:16:42.048258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.048284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.048370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.048409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.048528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.048554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.048648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.048676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.048831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.048858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.048970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.049002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.441 [2024-11-19 03:16:42.049123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.441 [2024-11-19 03:16:42.049152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.441 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.049250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.049292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.049496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.049536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.049626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.049655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 
00:35:31.442 [2024-11-19 03:16:42.049792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.049820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.049915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.049941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.050056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.050089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.050179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.050205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.050361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.050423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.050567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.050595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.442 [2024-11-19 03:16:42.050715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.442 [2024-11-19 03:16:42.050756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.442 qpair failed and we were unable to recover it. 00:35:31.733 [2024-11-19 03:16:42.050903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.733 [2024-11-19 03:16:42.050932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.733 qpair failed and we were unable to recover it. 00:35:31.733 [2024-11-19 03:16:42.051057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.733 [2024-11-19 03:16:42.051085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.733 qpair failed and we were unable to recover it. 00:35:31.733 [2024-11-19 03:16:42.051214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.051241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 
00:35:31.734 [2024-11-19 03:16:42.051399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.051453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.051539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.051566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.051711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.051739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.051879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.051906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.052026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.052053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.052175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.052202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.052380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.052431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.052521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.052550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.052664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.052707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.052830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.052857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 
00:35:31.734 [2024-11-19 03:16:42.052965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.052997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.053114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.053141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.053334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.053361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.053475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.053504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.053616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.053642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.053760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.053787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.053896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.053923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.054088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.054128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.054252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.054281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.054400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.054428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 
00:35:31.734 [2024-11-19 03:16:42.054523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.054549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.054644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.054703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.054854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.054883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.055005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.055033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.055177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.055204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.055290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.055318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.055414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.055442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.055588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.055616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.055813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.055841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.055934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.055960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 
00:35:31.734 [2024-11-19 03:16:42.056147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.056174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.056256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.056283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.056399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.056431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.056530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.056559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.056703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.056732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.056822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.734 [2024-11-19 03:16:42.056850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.734 qpair failed and we were unable to recover it. 00:35:31.734 [2024-11-19 03:16:42.056934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.056961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.057039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.057067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.057213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.057239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.057382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.057409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 
00:35:31.735 [2024-11-19 03:16:42.057484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.057511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.057629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.057658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.057822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.057851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.057942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.057968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.058084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.058110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.058195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.058221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.058389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.058416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.058561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.058587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.058705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.058745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.058898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.058927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 
00:35:31.735 [2024-11-19 03:16:42.059047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.059075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.059188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.059215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.059339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.059366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.059521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.059561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.059681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.059718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.059834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.059862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.060056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.060083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.060168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.060196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.060373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.060399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.060518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.060545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 
00:35:31.735 [2024-11-19 03:16:42.060623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.060653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.060811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.060851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.060941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.060970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.061086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.061114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.061230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.061257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.061353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.061384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.061500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.061526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.061616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.061644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.061776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.061819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 00:35:31.735 [2024-11-19 03:16:42.061939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.735 [2024-11-19 03:16:42.061967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.735 qpair failed and we were unable to recover it. 
00:35:31.741 [2024-11-19 03:16:42.092584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.092619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.092729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.092758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.092851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.092878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.092990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.093022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.093102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.093130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.093241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.093269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.093384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.093412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.093609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.093636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.093745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.093785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.093930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.093957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 
00:35:31.741 [2024-11-19 03:16:42.094036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.094062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.094156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.094183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.094277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.094303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.094393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.094420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.094559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.094585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.094698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.094726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.094807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.094835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.094966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.095006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.095151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.095179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.095288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.095315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 
00:35:31.741 [2024-11-19 03:16:42.095423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.095450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.095594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.095623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.741 [2024-11-19 03:16:42.095705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.741 [2024-11-19 03:16:42.095731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.741 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.095873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.095900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.096057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.096084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.096213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.096275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.096426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.096453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.096567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.096595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.096739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.096779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.096929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.096958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 
00:35:31.742 [2024-11-19 03:16:42.097137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.097170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.097281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.097307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.097397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.097422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.097534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.097559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.097704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.097745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.097847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.097876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.098004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.098031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.098155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.098210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.098383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.098437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.098547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.098575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 
00:35:31.742 [2024-11-19 03:16:42.098656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.098701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.098781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.098812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.098924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.098953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.099109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.099149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.099277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.099306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.099390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.099415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.099524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.099551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.099628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.099655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.099777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.099803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.099915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.099941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 
00:35:31.742 [2024-11-19 03:16:42.100059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.100085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.100193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.100219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.100338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.100366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.100451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.100477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.100564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.100591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.100664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.100709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.100817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.100844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.100993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.101021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.101116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.101144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 00:35:31.742 [2024-11-19 03:16:42.101234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.101260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.742 qpair failed and we were unable to recover it. 
00:35:31.742 [2024-11-19 03:16:42.101339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.742 [2024-11-19 03:16:42.101366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.101481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.101507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.101615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.101641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.101764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.101793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.101889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.101917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.102078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.102118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.102244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.102273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.102384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.102412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.102529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.102556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.102669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.102714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 
00:35:31.743 [2024-11-19 03:16:42.102868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.102895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.103024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.103061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.103181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.103208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.103291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.103319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.103429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.103456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.103568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.103595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.103707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.103735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.103854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.103882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.104018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.104058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.104200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.104228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 
00:35:31.743 [2024-11-19 03:16:42.104321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.104348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.104426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.104452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.104613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.104653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.104759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.104787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.104934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.104962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.105043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.105071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.105190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.105217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.105325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.105352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.105433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.105459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.105571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.105597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 
00:35:31.743 [2024-11-19 03:16:42.105717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.105746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.105877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.105916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.106031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.106058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.743 qpair failed and we were unable to recover it. 00:35:31.743 [2024-11-19 03:16:42.106157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.743 [2024-11-19 03:16:42.106183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.106269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.106294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.106381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.106420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.106517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.106545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.106661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.106712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.106802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.106828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.106918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.106946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 
00:35:31.744 [2024-11-19 03:16:42.107059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.107086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.107206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.107233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.107348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.107378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.107493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.107520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.107668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.107712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.107834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.107861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.107972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.107999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.108190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.108217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.108309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.108337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.108570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.108627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 
00:35:31.744 [2024-11-19 03:16:42.108744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.108772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.108917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.108944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.109029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.109056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.109217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.109270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.109364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.109391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.109514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.109541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.109619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.109646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.109768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.109795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.109908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.109935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.110052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.110079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 
00:35:31.744 [2024-11-19 03:16:42.110192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.110219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.110335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.110362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.110479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.110506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.110625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.110653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.110785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.110825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.110945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.110985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.111103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.111132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.111280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.111307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.111451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.111478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.111566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.111595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 
00:35:31.744 [2024-11-19 03:16:42.111729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.744 [2024-11-19 03:16:42.111769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.744 qpair failed and we were unable to recover it. 00:35:31.744 [2024-11-19 03:16:42.111864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.111893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.112042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.112161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.112329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.112460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.112573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.112737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.112872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.112966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 
00:35:31.745 [2024-11-19 03:16:42.113113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.113255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.113402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.113547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.113671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.113850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.113966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.113995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.114072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.114098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.114215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.114242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.114334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.114363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 
00:35:31.745 [2024-11-19 03:16:42.114478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.114505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.114628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.114656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.114866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.114894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.114977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.115004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.115190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.115218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.115360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.115386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.115522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.115549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.115666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.115717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.115831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.115858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.115968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.116004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 
00:35:31.745 [2024-11-19 03:16:42.116077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.116103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.116218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.116243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.116359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.116384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.116467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.116492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.116584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.745 [2024-11-19 03:16:42.116618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.745 qpair failed and we were unable to recover it. 00:35:31.745 [2024-11-19 03:16:42.116738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.116778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.116902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.116931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.117068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.117095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.117225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.117252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.117335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.117362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 
00:35:31.746 [2024-11-19 03:16:42.117479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.117507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.117600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.117629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.117764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.117793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.117908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.117935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.118020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.118047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.118213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.118266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.118459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.118485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.118597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.118624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.118776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.118804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.118936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.118964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 
00:35:31.746 [2024-11-19 03:16:42.119107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.119133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.119243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.119270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.119353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.119379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.119464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.119491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.119630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.119656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.119798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.119824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.119915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.119941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.120081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.120107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.120188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.120216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.120330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.120357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 
00:35:31.746 [2024-11-19 03:16:42.120435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.120461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.120577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.120605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.120749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.120789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.120895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.120935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.121058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.121086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.121256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.121308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.121395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.121422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.121535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.121561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.746 [2024-11-19 03:16:42.121642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.746 [2024-11-19 03:16:42.121669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.746 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.121814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.121840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 
00:35:31.747 [2024-11-19 03:16:42.121954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.121989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.122071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.122097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.122221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.122261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.122362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.122393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.122520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.122557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.122642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.122670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.122794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.122821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.122905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.122931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.123021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.123189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 
00:35:31.747 [2024-11-19 03:16:42.123312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.123456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.123596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.123715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.123854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.123960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.123989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.124131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.124157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.124295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.124320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.124465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.124492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.124600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.124626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 
00:35:31.747 [2024-11-19 03:16:42.124753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.124781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.124901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.124928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.125057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.125085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.125190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.125256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.125396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.125423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.125530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.125557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.125711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.125739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.125859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.125886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.126018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.126058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.126226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.126280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 
00:35:31.747 [2024-11-19 03:16:42.126469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.126514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.747 [2024-11-19 03:16:42.126640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.747 [2024-11-19 03:16:42.126667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.747 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.126778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.126818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.126940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.126969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.127109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.127135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.127250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.127277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.127456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.127506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.127662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.127722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.127825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.127855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.127978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.128006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 
00:35:31.748 [2024-11-19 03:16:42.128121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.128149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.128309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.128364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.128476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.128503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.128598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.128624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.128735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.128767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.128911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.128938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.129059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.129085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.129258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.129320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.129437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.129464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.129577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.129603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 
00:35:31.748 [2024-11-19 03:16:42.129725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.129754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.129841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.129868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.129995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.130025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.130118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.130144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.130385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.130452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.130573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.130601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.130699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.130727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.130816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.130843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.130942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.130982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 00:35:31.748 [2024-11-19 03:16:42.131076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.748 [2024-11-19 03:16:42.131103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.748 qpair failed and we were unable to recover it. 
00:35:31.749 [2024-11-19 03:16:42.131221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.131248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.131357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.131384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.131469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.131498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.131583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.131611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.131732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.131760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.131872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.131898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.132018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.132045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.132185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.132210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.132349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.132375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.132492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.132520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 
00:35:31.749 [2024-11-19 03:16:42.132651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.132708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.132832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.132861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.132981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.133008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.133122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.133157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.133274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.133302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.133384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.133412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.133527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.133567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.133723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.133751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.133871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.133898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.134012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.134038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 
00:35:31.749 [2024-11-19 03:16:42.134122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.134147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.134266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.134294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.134389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.134417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.134531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.134558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.134706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.134739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.134852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.134880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.135003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.135030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.135189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.135239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.135354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.135382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.135499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.135526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 
00:35:31.749 [2024-11-19 03:16:42.135638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.135664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.135780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.135807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.749 [2024-11-19 03:16:42.135907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.749 [2024-11-19 03:16:42.135947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.749 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.136081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.136109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.136226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.136255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.136366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.136393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.136505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.136531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.136682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.136716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.136814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.136842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.136961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.137048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 
00:35:31.750 [2024-11-19 03:16:42.137190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.137217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.137415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.137443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.137535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.137561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.137673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.137710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.137823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.137850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.137998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.138023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.138207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.138262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.138443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.138470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.138621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.138646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.138769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.138797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 
00:35:31.750 [2024-11-19 03:16:42.138878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.138904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.138992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.139027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.139115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.139143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.139291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.139317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.139411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.139438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.139520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.139546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.139662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.139708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.139826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.139852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.139970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.140004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.140148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.140174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 
00:35:31.750 [2024-11-19 03:16:42.140285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.140312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.140450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.140476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.140574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.140600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.140725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.140752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.140871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.140897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.140999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.750 [2024-11-19 03:16:42.141026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.750 qpair failed and we were unable to recover it. 00:35:31.750 [2024-11-19 03:16:42.141162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.141188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.141333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.141359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.141482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.141522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.141672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.141713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 
00:35:31.751 [2024-11-19 03:16:42.141843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.141870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.142010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.142036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.142148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.142174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.142317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.142342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.142538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.142566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.142679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.142712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.142831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.142858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.142973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.143010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.143198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.143254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.143340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.143366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 
00:35:31.751 [2024-11-19 03:16:42.144425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.144457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.144597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.144626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.144760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.144788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.144908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.144934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.145062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.145090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.145173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.145199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.145310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.145337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.145480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.145506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.145594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.145621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.145741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.145782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 
00:35:31.751 [2024-11-19 03:16:42.145921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.145961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.146115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.146149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.146236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.146264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.146379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.146406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.146497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.146523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.146613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.146640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.146763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.146791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.147740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.147773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.147873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.147900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.751 qpair failed and we were unable to recover it. 00:35:31.751 [2024-11-19 03:16:42.148021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.751 [2024-11-19 03:16:42.148048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 
00:35:31.752 [2024-11-19 03:16:42.148159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.148186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.148301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.148328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.148449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.148477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.148593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.148621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.148753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.148792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.148915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.148943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.149069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.149095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.149232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.149259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.149373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.149399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.149511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.149538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 
00:35:31.752 [2024-11-19 03:16:42.149653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.149698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.149811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.149837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.149954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.149991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.150077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.150103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.150183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.150235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.150375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.150401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.150532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.150566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.150714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.150740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.150863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.150890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.151049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 
00:35:31.752 [2024-11-19 03:16:42.151187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.151308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.151418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.151544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.151651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.151844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.151959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.151989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.152110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.152138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.152284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.152312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.152454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.152483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 
00:35:31.752 [2024-11-19 03:16:42.152618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.152644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.152771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.152804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.752 [2024-11-19 03:16:42.152914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.752 [2024-11-19 03:16:42.152941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.752 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.153019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.153055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.153302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.153360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.153478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.153505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.153622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.153648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.153775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.153803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.153898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.153925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.154050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.154076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 
00:35:31.753 [2024-11-19 03:16:42.154207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.154235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.154428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.154470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.154594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.154621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.154729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.154757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.154867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.154894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.155018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.155045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.155185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.155213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.155364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.155413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.155530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.155559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.155675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.155712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 
00:35:31.753 [2024-11-19 03:16:42.155838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.155865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.155979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.156011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.156146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.156172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.156265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.156291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.156428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.156454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.156551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.156579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.156661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.156714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.156841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.156868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.157016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.157042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 00:35:31.753 [2024-11-19 03:16:42.157168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.753 [2024-11-19 03:16:42.157194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.753 qpair failed and we were unable to recover it. 
00:35:31.753 [2024-11-19 03:16:42.157340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.157377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.157499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.157526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.157642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.157669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.157773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.157800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.157913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.157939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.158043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.158111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.158239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.158271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.158381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.158409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.158524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.158552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.158671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.158718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 
00:35:31.754 [2024-11-19 03:16:42.158830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.158857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.158971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.159004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.159134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.159161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.159303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.159330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.159424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.159452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.159574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.159602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.159753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.159782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.159933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.159963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.160049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.160075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.160215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.160242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 
00:35:31.754 [2024-11-19 03:16:42.160324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.160351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.160437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.160463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.160586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.160625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.160768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.160798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.160914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.160942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.161146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.161215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.161308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.161334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.161449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.161475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.161597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.161624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.161750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.161777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 
00:35:31.754 [2024-11-19 03:16:42.161889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.161915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.162000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.162027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.162149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.754 [2024-11-19 03:16:42.162176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.754 qpair failed and we were unable to recover it. 00:35:31.754 [2024-11-19 03:16:42.162263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.162289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.162376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.162402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.163133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.163167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.163300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.163328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.163470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.163497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.163623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.163649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.163795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.163837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 
00:35:31.755 [2024-11-19 03:16:42.163988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.164017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.164131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.164158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.164347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.164410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.164526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.164554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.164673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.164718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.164836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.164863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.165008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.165035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.165144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.165171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.165286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.165313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.165392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.165419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 
00:35:31.755 [2024-11-19 03:16:42.165503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.165529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.165680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.165720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.165841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.165868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.166007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.166122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.166274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.166388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.166528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.166648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.166801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 
00:35:31.755 [2024-11-19 03:16:42.166921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.166948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.167055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.167121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.167274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.167301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.167421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.167448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.167574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.167614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.167729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.167757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.167850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.755 [2024-11-19 03:16:42.167876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.755 qpair failed and we were unable to recover it. 00:35:31.755 [2024-11-19 03:16:42.168058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.168113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.168196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.168223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.168417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.168445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 
00:35:31.756 [2024-11-19 03:16:42.168557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.168583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.168711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.168740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.168820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.168847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.169002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.169029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.169124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.169190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.169334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.169368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.169515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.169545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.169666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.169704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.169859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.169889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.170042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.170068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 
00:35:31.756 [2024-11-19 03:16:42.170169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.170196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.170308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.170336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.170430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.170457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.170573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.170598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.170711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.170738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.170849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.170875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.170986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.171013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.171154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.171180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.171301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.171327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.171409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.171434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 
00:35:31.756 [2024-11-19 03:16:42.171557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.171597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.171764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.171798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.171884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.171911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.172064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.172090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.172176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.172202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.172313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.172349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.172436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.172463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.172574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.172601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.172697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.172725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 00:35:31.756 [2024-11-19 03:16:42.172835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.756 [2024-11-19 03:16:42.172862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.756 qpair failed and we were unable to recover it. 
00:35:31.756 [2024-11-19 03:16:42.173007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.173033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.173148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.173182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.173310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.173339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.173461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.173491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.173582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.173610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.173750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.173779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.173863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.173890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.174017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.174054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.174154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.174180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.174300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.174326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 
00:35:31.757 [2024-11-19 03:16:42.174412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.174439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.174586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.174614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.174741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.174768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.174880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.174907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.175065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.175092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.175277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.175304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.175417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.175445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.175563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.175592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.175722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.175750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.175869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.175896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 
00:35:31.757 [2024-11-19 03:16:42.175998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.176025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.176130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.176157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.176298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.176325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.176439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.176466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.176588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.176614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.176733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.176760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.176882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.176909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.176999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.177027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.177138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.177165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.177260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.177288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 
00:35:31.757 [2024-11-19 03:16:42.177368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.177395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.177482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.177513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.177602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.177630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.757 [2024-11-19 03:16:42.177761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.757 [2024-11-19 03:16:42.177788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.757 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.177874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.177900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.178026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.178059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.178212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.178239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.178359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.178386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.178468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.178494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.178605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.178632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 
00:35:31.758 [2024-11-19 03:16:42.178767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.178795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.178889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.178916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.179944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.179970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 
00:35:31.758 [2024-11-19 03:16:42.180101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.180242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.180348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.180468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.180575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.180719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.180837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.180943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.180969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.181098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.181125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.181219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.181246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 
00:35:31.758 [2024-11-19 03:16:42.181362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.181388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.181498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.758 [2024-11-19 03:16:42.181524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.758 qpair failed and we were unable to recover it. 00:35:31.758 [2024-11-19 03:16:42.181604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.181630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.181725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.181751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.181845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.181873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.182014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.182163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.182292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.182456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.182563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 
00:35:31.759 [2024-11-19 03:16:42.182672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.182802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.182921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.182947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.183038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.183065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.183221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.183248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.183364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.183390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.183507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.183533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.183654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.183697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.183786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.183813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.183908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.183935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 
00:35:31.759 [2024-11-19 03:16:42.184083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.184130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.184239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.184265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.184345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.184370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.184460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.184487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.184570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.184596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.184721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.184749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.184841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.184867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.184977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.185095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.185237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 
00:35:31.759 [2024-11-19 03:16:42.185348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.185494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.185631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.185767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.759 [2024-11-19 03:16:42.185885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.759 [2024-11-19 03:16:42.185912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.759 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.186023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.186059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.186169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.186195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.186332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.186358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.186484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.186524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.186678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.186721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 
00:35:31.760 [2024-11-19 03:16:42.186807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.186834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.186921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.186948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.187129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.187180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.187316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.187372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.187483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.187509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.187596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.187622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.187740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.187767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.187857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.187883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.187981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.188166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 
00:35:31.760 [2024-11-19 03:16:42.188304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.188419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.188534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.188673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.188798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.188912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.188939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.189063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.189096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.189219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.189246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.189360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.189386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.189525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.189552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 
00:35:31.760 [2024-11-19 03:16:42.189640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.189665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.189769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.189796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.189873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.189899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.189972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.190001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.190090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.190116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.190212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.190239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.760 qpair failed and we were unable to recover it. 00:35:31.760 [2024-11-19 03:16:42.190349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.760 [2024-11-19 03:16:42.190375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.190488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.190515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.190667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.190712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.190816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.190845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 
00:35:31.761 [2024-11-19 03:16:42.190941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.190973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.191122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.191268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.191404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.191537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.191650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.191761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.191873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.191968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.192007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.192164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.192204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 
00:35:31.761 [2024-11-19 03:16:42.192358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.192397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.192484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.192510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.192623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.192649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.192751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.192778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.192862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.192888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.193001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.193034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.193171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.193227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.193397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.193447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.193581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.193608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.193705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.193733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 
00:35:31.761 [2024-11-19 03:16:42.193820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.193847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.193940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.193967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.194102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.194129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.194245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.194271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.194405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.194434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.194513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.194540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.194663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.194707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.194796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.194823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.194931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.194957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 00:35:31.761 [2024-11-19 03:16:42.195130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.761 [2024-11-19 03:16:42.195164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.761 qpair failed and we were unable to recover it. 
00:35:31.762 [2024-11-19 03:16:42.195361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.195394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.195508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.195534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.195624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.195651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.195750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.195778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.195875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.195902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.196032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.196080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.196258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.196292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.196427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.196453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.196571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.196598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.196683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.196719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 
00:35:31.762 [2024-11-19 03:16:42.196811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.196839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.196935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.196961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.197896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.197923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 
00:35:31.762 [2024-11-19 03:16:42.198036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.198135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.198258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.198433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.198586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.198719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.198838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.198954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.198990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.199079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.199108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.199234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.199261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 
00:35:31.762 [2024-11-19 03:16:42.199350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.762 [2024-11-19 03:16:42.199377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.762 qpair failed and we were unable to recover it. 00:35:31.762 [2024-11-19 03:16:42.199467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.199495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.199601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.199641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.199749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.199777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.199873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.199902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.200029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.200066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.200226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.200260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.200429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.200463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.200573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.200599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.200718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.200745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 
00:35:31.763 [2024-11-19 03:16:42.200840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.200865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.200949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.200986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.201105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.201139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.201264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.201308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.201423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.201470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.201569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.201608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.201724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.201751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.201847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.201873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.201962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.201995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.202115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.202141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 
00:35:31.763 [2024-11-19 03:16:42.202256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.202307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.202426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.202474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.202613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.202639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.202786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950970 is same with the state(6) to be set 00:35:31.763 [2024-11-19 03:16:42.202922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.202961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.203099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.203126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.203263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.203312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.203401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.203427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.203551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.203591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.203731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.203766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 
00:35:31.763 [2024-11-19 03:16:42.203866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.203894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.204003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.763 [2024-11-19 03:16:42.204037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.763 qpair failed and we were unable to recover it. 00:35:31.763 [2024-11-19 03:16:42.204181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.204215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.204367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.204401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.204530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.204563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.204715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.204755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.204857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.204885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.205007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.205138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.205319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 
00:35:31.764 [2024-11-19 03:16:42.205448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.205589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.205715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.205846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.205958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.205997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.206082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.206107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.206189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.206232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.206366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.206411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.206502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.206528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.206610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.206635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 
00:35:31.764 [2024-11-19 03:16:42.206754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.206783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.206869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.206896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.207117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.207306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.207421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.207527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.207636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.207767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.207885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.207974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.208012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 
00:35:31.764 [2024-11-19 03:16:42.208096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.208121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.208198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.208224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.764 [2024-11-19 03:16:42.208330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.764 [2024-11-19 03:16:42.208356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.764 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.208442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.208471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.208597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.208637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.208742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.208772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.208864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.208892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.208980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.209118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.209277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 
00:35:31.765 [2024-11-19 03:16:42.209432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.209561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.209673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.209809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.209934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.209967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.210121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.210148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.210235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.210264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.210359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.210391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.210509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.210536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.210679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.210716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 
00:35:31.765 [2024-11-19 03:16:42.210805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.210833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.210931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.210965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.211099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.211126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.211214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.211242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.211369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.211396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.211480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.211505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.211595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.211621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.211717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.211743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.211834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.211885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.212037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.212083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 
00:35:31.765 [2024-11-19 03:16:42.212250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.212283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.212399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.212427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.212517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.212544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.212660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.212702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.212797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.765 [2024-11-19 03:16:42.212825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.765 qpair failed and we were unable to recover it. 00:35:31.765 [2024-11-19 03:16:42.212906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.212934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.213070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.213107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.213238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.213289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.213407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.213434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.213514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.213541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 
00:35:31.766 [2024-11-19 03:16:42.213629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.213658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.213766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.213794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.213877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.213905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.214015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.214166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.214278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.214391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.214522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.214657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.214794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 
00:35:31.766 [2024-11-19 03:16:42.214918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.214945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.215061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.215087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.215209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.215238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.215327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.215355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.215478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.215509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.215587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.215614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.215714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.215741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.215854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.215886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.216039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.216072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.216223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.216249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 
00:35:31.766 [2024-11-19 03:16:42.216371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.216404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.216519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.216545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.216661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.216701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.216790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.216821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.216929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.216984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.217079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.217106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.217243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.217290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.766 [2024-11-19 03:16:42.217427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.766 [2024-11-19 03:16:42.217474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.766 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.217565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.217592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.217722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.217752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 
00:35:31.767 [2024-11-19 03:16:42.217843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.217871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.217974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.218001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.218113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.218140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.218278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.218304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.218417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.218444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.218559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.218600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.218724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.218752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.218867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.218894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.219032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.219185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 
00:35:31.767 [2024-11-19 03:16:42.219325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.219445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.219560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.219666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.219795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.219921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.219960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.220083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.220111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.220254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.220281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.220370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.220396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.220511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.220538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 
00:35:31.767 [2024-11-19 03:16:42.220622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.220650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.220763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.220792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.220893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.220923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.221037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.221071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.221217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.221252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.221365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.221400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.221547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.221575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.221723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.221751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.221833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.221862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.221953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.221988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 
00:35:31.767 [2024-11-19 03:16:42.222104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.767 [2024-11-19 03:16:42.222131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.767 qpair failed and we were unable to recover it. 00:35:31.767 [2024-11-19 03:16:42.222249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.222277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.222395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.222422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.222512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.222538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.222636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.222662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.222769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.222795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.222871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.222897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.223043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.223087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.223199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.223234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.223374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.223408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 
00:35:31.768 [2024-11-19 03:16:42.223579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.223607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.223707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.223736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.223830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.223868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.223957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.223996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.224106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.224158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.224360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.224387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.224461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.224488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.224618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.224658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.224767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.224796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.224881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 
00:35:31.768 [2024-11-19 03:16:42.225030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.225060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.225210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.225256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.225477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.225514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.225668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.225710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.225805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.225831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.225910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.225936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.226086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.226111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.226322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.226355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.226586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.226615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.226724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.226752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 
00:35:31.768 [2024-11-19 03:16:42.226843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.226874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.226963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.227000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.227116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.227142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.227225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.227251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.768 [2024-11-19 03:16:42.227334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.768 [2024-11-19 03:16:42.227362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.768 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.227461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.227501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.227595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.227623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.227751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.227779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.227889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.227916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.228048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.228075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 
00:35:31.769 [2024-11-19 03:16:42.228220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.228246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.228341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.228368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.228481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.228507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.228632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.228658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.228782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.228810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.228909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.228936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.229071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.229097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.229242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.229269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.229418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.229444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.229568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.229594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 
00:35:31.769 [2024-11-19 03:16:42.229677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.229716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.229810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.229837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.229912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.229938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.230029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.230069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.230180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.230206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.230356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.230391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.230503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.230529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.230666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.230706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.230786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.230812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.230892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.230918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 
00:35:31.769 [2024-11-19 03:16:42.231022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.231059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.231132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.231159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.769 qpair failed and we were unable to recover it. 00:35:31.769 [2024-11-19 03:16:42.231253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.769 [2024-11-19 03:16:42.231278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.231395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.231421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.231503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.231529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.231611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.231637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.231745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.231771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.231863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.231889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.231970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.232026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.232175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.232207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 
00:35:31.770 [2024-11-19 03:16:42.232421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.232454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.232622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.232661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.232776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.232804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.232921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.232948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.233052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.233192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.233330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.233460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.233580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.233728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 
00:35:31.770 [2024-11-19 03:16:42.233838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.233956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.233993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.234123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.234150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.234247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.234273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.234362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.234394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.234510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.234537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.234628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.234656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.234774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.234815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.234911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.234938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.235062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.235089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 
00:35:31.770 [2024-11-19 03:16:42.235176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.235202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.235293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.235319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.235434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.235474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.235601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.235629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.235748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.235779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.235867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.770 [2024-11-19 03:16:42.235894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.770 qpair failed and we were unable to recover it. 00:35:31.770 [2024-11-19 03:16:42.235996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.236028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.236141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.236167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.236362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.236394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.236532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.236561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 
00:35:31.771 [2024-11-19 03:16:42.236707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.236735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.236826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.236856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.236946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.236974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.237129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.237164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.237292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.237340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.237484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.237511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.237603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.237630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.237724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.237752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.237850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.237877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.237973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 
00:35:31.771 [2024-11-19 03:16:42.238158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.238314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.238432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.238583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.238719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.238839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.238953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.238990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.239137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.239184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.239327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.239372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.239515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.239541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 
00:35:31.771 [2024-11-19 03:16:42.239616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.239641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.239743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.239770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.239860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.239887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.239969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.240000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.240091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.240121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.240227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.240277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.240443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.240479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.240614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.240654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.240771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.240799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.771 qpair failed and we were unable to recover it. 00:35:31.771 [2024-11-19 03:16:42.240890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.771 [2024-11-19 03:16:42.240917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 
00:35:31.772 [2024-11-19 03:16:42.241084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.241132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.241267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.241315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.241419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.241446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.241563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.241590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.241668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.241714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.241810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.241837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.241923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.241952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.242047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.242074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.242202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.242229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.242368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.242395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 
00:35:31.772 [2024-11-19 03:16:42.242517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.242545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.242637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.242664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.242771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.242800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.242888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.242916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.243034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.243068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.243176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.243203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.243294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.243320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.243465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.243496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.243640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.243676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.243772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.243799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 
00:35:31.772 [2024-11-19 03:16:42.243891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.243919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.244037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.244069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.244165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.244192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.244332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.244360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.244477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.244505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.244615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.244643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.244728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.244754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.244850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.244877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.245013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.245050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.245177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.245203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 
00:35:31.772 [2024-11-19 03:16:42.245323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.245350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.245439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.245466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.245610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.772 [2024-11-19 03:16:42.245636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.772 qpair failed and we were unable to recover it. 00:35:31.772 [2024-11-19 03:16:42.245751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.245779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.245867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.245893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.246021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.246143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.246318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.246464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.246590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 
00:35:31.773 [2024-11-19 03:16:42.246730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.246852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.246964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.246990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.247070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.247097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.247214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.247253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.247341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.247370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.247466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.247493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.247635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.247662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.247778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.247806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.247885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.247912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 
00:35:31.773 [2024-11-19 03:16:42.247998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.248026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.248159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.248185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.248301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.248327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.248479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.248506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.248622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.248648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.248751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.248778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.248852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.248879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.248962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.249156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.249304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 
00:35:31.773 [2024-11-19 03:16:42.249447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.249588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.249724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.249837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.249955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.249991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.773 [2024-11-19 03:16:42.250112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.773 [2024-11-19 03:16:42.250138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.773 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.250260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.250292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.250403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.250431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.250518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.250545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.250663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.250710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 
00:35:31.774 [2024-11-19 03:16:42.250829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.250858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.250945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.250971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.251099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.251126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.251217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.251244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.251361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.251388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.251505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.251532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.251684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.251719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.252452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.252483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.252607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.252635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.252762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.252790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 
00:35:31.774 [2024-11-19 03:16:42.252877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.252904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.253047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.253073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.253220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.253320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.253360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.253511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.253539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.253677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.253714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.253815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.253843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.253931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.253958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.254082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.254109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.254230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.254257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 
00:35:31.774 [2024-11-19 03:16:42.254366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.254392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.254507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.254533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.254626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.254652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.254784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.774 [2024-11-19 03:16:42.254811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.774 qpair failed and we were unable to recover it. 00:35:31.774 [2024-11-19 03:16:42.254896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.254922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.255042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.255078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.255191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.255217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.255331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.255358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.255475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.255502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.255614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.255641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 
00:35:31.775 [2024-11-19 03:16:42.255776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.255802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.255884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.255915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.256039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.256065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.256215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.256242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.256365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.256392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.256466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.256492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.256641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.256668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.256792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.256818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.256917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.256944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.257075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.257115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 
00:35:31.775 [2024-11-19 03:16:42.257270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.257299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.257414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.257441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.257592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.257620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.257724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.257753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.257849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.257876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.257967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.258141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.258332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.258448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.258586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 
00:35:31.775 [2024-11-19 03:16:42.258731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.258847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.258964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.258990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.259130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.259156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.259267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.259294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.259388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.259414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.259541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.259567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.259668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.259726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.259899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.259938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 00:35:31.775 [2024-11-19 03:16:42.260039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.775 [2024-11-19 03:16:42.260068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.775 qpair failed and we were unable to recover it. 
00:35:31.775 [2024-11-19 03:16:42.260180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.260213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.260325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.260353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.260469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.260495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.260587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.260612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.260705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.260733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.260829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.260854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.260941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.260970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.261150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.261198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.261325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.261376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.261466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.261494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 
00:35:31.776 [2024-11-19 03:16:42.261609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.261637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.261743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.261777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.261872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.261899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.261980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.262133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.262291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.262436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.262545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.262672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.262825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 
00:35:31.776 [2024-11-19 03:16:42.262939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.262966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.263144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.263193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.263310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.263336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.263465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.263496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.263593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.263621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.263766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.263793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.263877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.263904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.264037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.264086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.264238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.264264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.264417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.264444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 
00:35:31.776 [2024-11-19 03:16:42.264560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.264588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.264668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.264711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.264794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.264821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.264917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.264946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.265062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.265096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.265306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.265339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.776 [2024-11-19 03:16:42.265489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.776 [2024-11-19 03:16:42.265515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.776 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.265632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.265658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.265809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.265853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.265944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.265972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 
00:35:31.777 [2024-11-19 03:16:42.266718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.266750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.266873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.266902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.267928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.267953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 
00:35:31.777 [2024-11-19 03:16:42.268062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.268088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.268171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.268197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.268321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.268347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.268440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.268465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.268578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.268605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.268727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.268753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.268853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.268887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.268990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.269037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.269132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.269161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.269306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.269333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 
00:35:31.777 [2024-11-19 03:16:42.269452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.269481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.269600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.269639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.269771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.269800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.269927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.269955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.270115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.270162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.270313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.270359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.270588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.270637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.270738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.270766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.270860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.270887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.271025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.271074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 
00:35:31.777 [2024-11-19 03:16:42.271166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.271192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.271388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.271416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.271564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.777 [2024-11-19 03:16:42.271599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.777 qpair failed and we were unable to recover it. 00:35:31.777 [2024-11-19 03:16:42.271680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.271718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.271812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.271840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.271920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.271945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.272087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.272137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.272297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.272356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.272468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.272500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.272613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.272640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 
00:35:31.778 [2024-11-19 03:16:42.272779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.272806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.272900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.272926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.273118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.273178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.273286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.273322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.273481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.273510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.273589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.273615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.273704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.273731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.273818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.273844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.273957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.273985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.274089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.274115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 
00:35:31.778 [2024-11-19 03:16:42.274201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.274228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.274316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.274344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.274455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.274482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.274592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.274621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.274748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.274775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.274890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.274915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.275002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.275029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.275147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.275173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.275331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.275371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.275464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.275494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 
00:35:31.778 [2024-11-19 03:16:42.275603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.275631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.275780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.275808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.275927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.275956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.276076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.276107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.276193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.276220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.276303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.276333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.276418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.276445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.276530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.276557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.276657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.276710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.276839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.276867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 
00:35:31.778 [2024-11-19 03:16:42.276986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.277020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.277097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.778 [2024-11-19 03:16:42.277124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.778 qpair failed and we were unable to recover it. 00:35:31.778 [2024-11-19 03:16:42.277236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.277263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.277380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.277406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.277495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.277523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.277650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.277709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.277806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.277835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.277968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.278013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.278195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.278251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.278432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.278479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 
00:35:31.779 [2024-11-19 03:16:42.278589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.278616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.278708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.278738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.278873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.278925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.279060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.279117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.279289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.279323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.279493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.279526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.279641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.279670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.279805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.279834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.279967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.280022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.280187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.280248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 
00:35:31.779 [2024-11-19 03:16:42.280373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.280399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.280486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.280512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.280633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.280660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.280794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.280820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.280909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.280935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.281025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.281141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.281287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.281421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.281556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 
00:35:31.779 [2024-11-19 03:16:42.281703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.281827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.281944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.281985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.282128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.282177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.282291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.282346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.282490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.779 [2024-11-19 03:16:42.282518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.779 qpair failed and we were unable to recover it. 00:35:31.779 [2024-11-19 03:16:42.282663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.282715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.282819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.282848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.282940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.282965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.283070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.283141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 
00:35:31.780 [2024-11-19 03:16:42.283308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.283351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.283589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.283632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.283733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.283760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.283843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.283870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.283950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.283988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.284131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.284165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.284294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.284342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.284482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.284514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.284650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.284675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.284787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.284814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 
00:35:31.780 [2024-11-19 03:16:42.284904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.284929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.285047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.285076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.285179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.285212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.285405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.285465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.285607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.285642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.285738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.285765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.285870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.285897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.286040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.286084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.286285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.286347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.286482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.286518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 
00:35:31.780 [2024-11-19 03:16:42.286663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.286707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.286792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.286827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.286944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.286986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.287125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.287181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.287378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.287407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.287551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.287577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.287705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.287731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.287832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.287858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.287949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.287982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.288069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.288095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 
00:35:31.780 [2024-11-19 03:16:42.288204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.288230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.288337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.288365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.288456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.288483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.288594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.288620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.288716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.288743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.288849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.288896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.780 [2024-11-19 03:16:42.289038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.780 [2024-11-19 03:16:42.289088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.780 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.289177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.289211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.289385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.289437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.289515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.289543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 
00:35:31.781 [2024-11-19 03:16:42.289653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.289716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.289849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.289876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.290010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.290069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.290210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.290256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.290340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.290366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.290478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.290504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.290615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.290641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.290776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.290804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.290905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.290931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.291051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.291078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 
00:35:31.781 [2024-11-19 03:16:42.291195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.291222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.291372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.291413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.291506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.291535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.291612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.291639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.291740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.291767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.291877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.291904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.292014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.292133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.292314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.292433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 
00:35:31.781 [2024-11-19 03:16:42.292547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.292683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.292799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.292913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.292944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.293021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.293047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.293156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.293185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.293274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.293303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.293429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.293458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.293541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.293568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.293657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.293684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 
00:35:31.781 [2024-11-19 03:16:42.293828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.293865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.294048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.294095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.294227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.294281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.294399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.294430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.294564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.294592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.294671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.294715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.294846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.294874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.294958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.294990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.295098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.295125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.781 qpair failed and we were unable to recover it. 00:35:31.781 [2024-11-19 03:16:42.295256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.781 [2024-11-19 03:16:42.295303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 
00:35:31.782 [2024-11-19 03:16:42.295427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.295455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.295568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.295595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.295685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.295729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.295850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.295877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.295973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.296116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.296272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.296384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.296493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.296610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 
00:35:31.782 [2024-11-19 03:16:42.296738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.296865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.296895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.297017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.297044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.297142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.297169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.297455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.297489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.297634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.297660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.297791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.297832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.297980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.298025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.298178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.298231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.298380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.298431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 
00:35:31.782 [2024-11-19 03:16:42.298548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.298574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.298699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.298725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.298822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.298860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.298956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.298993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.299129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.299186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.299272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.299298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.299381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.299408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.299503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.299531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.299647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.299686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.299799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.299830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 
00:35:31.782 [2024-11-19 03:16:42.299944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.299972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.300060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.300087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.300217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.300267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.300397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.300424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.300559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.300609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.300701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.300730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.300868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.300895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.301058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.301102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.301229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.301272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.301428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.301465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 
00:35:31.782 [2024-11-19 03:16:42.301600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.782 [2024-11-19 03:16:42.301626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.782 qpair failed and we were unable to recover it. 00:35:31.782 [2024-11-19 03:16:42.301727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.301766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.301875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.301902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.301988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.302016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.302155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.302195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.302292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.302320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.302466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.302492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.302601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.302627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.302730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.302757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.302845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.302876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 
00:35:31.783 [2024-11-19 03:16:42.302998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.303027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.303142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.303202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.303372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.303419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.303499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.303537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.303613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.303639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.303720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.303748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.303865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.303893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.304019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.304060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.304154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.304182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.304301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.304339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 
00:35:31.783 [2024-11-19 03:16:42.304429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.304458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.304564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.304603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.304746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.304775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.304921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.304948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.305040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.305067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.305179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.305206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.305287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.305316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.305459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.305508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.305624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.305652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.305818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.305846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 
00:35:31.783 [2024-11-19 03:16:42.305944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.305970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.306090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.306116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.306203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.306231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.306381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.306409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.306499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.306530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.306618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.306646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.783 [2024-11-19 03:16:42.306747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.783 [2024-11-19 03:16:42.306776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.783 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.306888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.306915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.307041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.307067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.307182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.307209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 
00:35:31.784 [2024-11-19 03:16:42.307325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.307353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.307500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.307529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.307675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.307711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.307799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.307825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.307907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.307934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.308063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.308097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.308339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.308373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.308524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.308570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.308678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.308719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.308835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.308864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 
00:35:31.784 [2024-11-19 03:16:42.308958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.308985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.309120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.309170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.309304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.309383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.309473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.309502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.309593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.309621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.309727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.309757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.309848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.309875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.309968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.309994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.310073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.310100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.310187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.310218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 
00:35:31.784 [2024-11-19 03:16:42.310336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.310363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.310484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.310511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.310617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.310644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.310785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.784 [2024-11-19 03:16:42.310815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.784 qpair failed and we were unable to recover it. 00:35:31.784 [2024-11-19 03:16:42.310905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.310932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.311081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.311130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.311261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.311297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.311413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.311439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.311516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.311543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.311635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.311662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 
00:35:31.785 [2024-11-19 03:16:42.311812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.311852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.311960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.311988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.312106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.312133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.312253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.312280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.312400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.312427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.312557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.312586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.312711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.312745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.312833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.312860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.312973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.312999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.313113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.313139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 
00:35:31.785 [2024-11-19 03:16:42.313254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.313280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.313424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.313450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.313573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.313602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.313717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.313757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.313896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.313925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.314099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.314148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.314396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.314461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.314573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.314600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.314713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.314741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.314831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.314859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 
00:35:31.785 [2024-11-19 03:16:42.314973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.315010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.315147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.315195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.315343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.315402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.315514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.315552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.315669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.315703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.315800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.315840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.315931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.315959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.316075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.316109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.316197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.316224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.316348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.316388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 
00:35:31.785 [2024-11-19 03:16:42.316525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.316572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.785 [2024-11-19 03:16:42.316726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.785 [2024-11-19 03:16:42.316755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.785 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.316872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.316899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.317001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.317029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.317134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.317169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.317345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.317389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.317529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.317562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.317676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.317708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.317804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.317832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.317930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.317957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 
00:35:31.786 [2024-11-19 03:16:42.318097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.318151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.318348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.318414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.318570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.318598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:31.786 qpair failed and we were unable to recover it. 00:35:31.786 [2024-11-19 03:16:42.318717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.786 [2024-11-19 03:16:42.318749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.318835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.318864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.318955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.318982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.319093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.319125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.319209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.319236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.319380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.319407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.319498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.319525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 
00:35:32.080 [2024-11-19 03:16:42.319619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.319647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.319740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.319768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.319890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.319919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.320016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.320046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.320146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.320174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.320321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.320348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.320444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.320473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.320590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.320617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.320748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.320777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.320865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.320893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 
00:35:32.080 [2024-11-19 03:16:42.320984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.321012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.321124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.321151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.321292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.321322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.321440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.321467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.321565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.321605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.321704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.321733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.321877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.321904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.321992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.322106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.322215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 
00:35:32.080 [2024-11-19 03:16:42.322324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.322467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.322603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.322746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.322858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.322885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.322999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.323026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.323153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.323179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.323269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.323297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.080 qpair failed and we were unable to recover it. 00:35:32.080 [2024-11-19 03:16:42.323450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.080 [2024-11-19 03:16:42.323490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.323588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.323617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 
00:35:32.081 [2024-11-19 03:16:42.323716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.323745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.323861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.323888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.324034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.324064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.324196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.324245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.324472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.324531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.324621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.324648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.324742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.324771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.324902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.324929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.325012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.325049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.325168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.325194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 
00:35:32.081 [2024-11-19 03:16:42.325294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.325329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.325432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.325459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.325573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.325605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.325734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.325774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.325868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.325897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.326020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.326133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.326252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.326404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.326583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 
00:35:32.081 [2024-11-19 03:16:42.326721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.326852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.326961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.326988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.327093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.327129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.327311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.327369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.327483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.327511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.327662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.327707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.327824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.327850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.327946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.327975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.328055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.328082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 
00:35:32.081 [2024-11-19 03:16:42.328192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.328240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.328405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.328467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.328608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.328635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.328774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.328808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.328902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.328930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.329072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.081 [2024-11-19 03:16:42.329118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.081 qpair failed and we were unable to recover it. 00:35:32.081 [2024-11-19 03:16:42.329271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.329321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.329436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.329470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.329631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.329658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.329755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.329781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 
00:35:32.082 [2024-11-19 03:16:42.329879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.329907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.330960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.330994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.331106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.331133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 
00:35:32.082 [2024-11-19 03:16:42.331216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.331243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.331381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.331408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.331526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.331554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.331652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.331685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.331822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.331863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.331950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.331978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.332099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.332146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.332259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.332285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.332372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.332399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.332524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.332552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 
00:35:32.082 [2024-11-19 03:16:42.332672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.332719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.332841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.332868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.332987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.333047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.333200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.333246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.333339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.333367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.333481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.333508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.333640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.333698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.333816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.333844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.333948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.333987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.334101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.334131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 
00:35:32.082 [2024-11-19 03:16:42.334246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.334273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.334416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.334476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.334585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.334612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.082 [2024-11-19 03:16:42.334727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.082 [2024-11-19 03:16:42.334754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.082 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.334848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.334876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.335030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.335099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.335317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.335365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.335496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.335533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.335649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.335675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.335802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.335831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 
00:35:32.083 [2024-11-19 03:16:42.335922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.335949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.336041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.336069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.336152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.336188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.336273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.336302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.336452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.336500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.336637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.336663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.336770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.336797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.336894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.336921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.337096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.337133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.337241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.337276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 
00:35:32.083 [2024-11-19 03:16:42.337437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.337487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.337636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.337663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.337772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.337800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.337892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.337919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.338061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.338088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.338237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.338264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.338374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.338425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.338537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.338564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.338707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.338735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.338815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.338843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 
00:35:32.083 [2024-11-19 03:16:42.338969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.339002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.339152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.339201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.339408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.339453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.339539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.339566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.339709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.339736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.339828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.339862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.339987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.340015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.340099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.340126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.340255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.340282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.340372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.340399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 
00:35:32.083 [2024-11-19 03:16:42.340529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.340557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.083 qpair failed and we were unable to recover it. 00:35:32.083 [2024-11-19 03:16:42.340653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.083 [2024-11-19 03:16:42.340700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.340816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.340845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.340936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.340964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.341078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.341106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.341193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.341220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.341310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.341336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.341423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.341463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.341612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.341640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.341739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.341768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 
00:35:32.084 [2024-11-19 03:16:42.341871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.341897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.341990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.342016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.342134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.342172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.342260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.342288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.342402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.342429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.342549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.342577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.342700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.342731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.342863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.342903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.343047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.343084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.343194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.343223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 
00:35:32.084 [2024-11-19 03:16:42.343318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.343344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.343460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.343488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.343591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.343619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.343746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.343787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.343919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.343947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.344068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.344096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.344261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.344307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.344473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.344500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.344613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.344640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.344777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.344806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 
00:35:32.084 [2024-11-19 03:16:42.344923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.344949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.345067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.084 [2024-11-19 03:16:42.345094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.084 qpair failed and we were unable to recover it. 00:35:32.084 [2024-11-19 03:16:42.345208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.345235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.345316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.345354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.345460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.345487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.345603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.345630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.345765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.345805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.345923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.345963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.346097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.346125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.346234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.346261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 
00:35:32.085 [2024-11-19 03:16:42.346381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.346419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.346531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.346558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.346672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.346709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.346810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.346840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.346975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.347114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.347233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.347353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.347492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.347598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 
00:35:32.085 [2024-11-19 03:16:42.347770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.347894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.347920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.348021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.348058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.348189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.348236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.348372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.348406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.348553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.348586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.348729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.348757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.348873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.348905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.348998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.349108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 
00:35:32.085 [2024-11-19 03:16:42.349286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.349395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.349561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.349700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.349809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.349943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.349969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.350052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.350080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.350177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.350204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.350428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.350468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.085 qpair failed and we were unable to recover it. 00:35:32.085 [2024-11-19 03:16:42.350558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.085 [2024-11-19 03:16:42.350586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 
00:35:32.086 [2024-11-19 03:16:42.350678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.350711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.350838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.350866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.350977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.351122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.351302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.351417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.351537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.351680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.351834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.351956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.351996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 
00:35:32.086 [2024-11-19 03:16:42.352192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.352252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.352447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.352474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.352589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.352616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.352728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.352755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.352837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.352878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.352968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.352995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.353088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.353118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.353267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.353295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.353414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.353441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.353529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.353556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 
00:35:32.086 [2024-11-19 03:16:42.353660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.353687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.353792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.353820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.353948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.353980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.354068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.354095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.354177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.354203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.354469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.354544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.354725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.354752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.354867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.354893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.354980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.355007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.355118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.355158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 
00:35:32.086 [2024-11-19 03:16:42.355300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.355350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.355474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.355501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.355617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.355648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.355737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.355764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.355887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.355913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.356000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.356027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.356126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.356153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.086 qpair failed and we were unable to recover it. 00:35:32.086 [2024-11-19 03:16:42.356239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.086 [2024-11-19 03:16:42.356267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.356365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.356394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.356515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.356542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 
00:35:32.087 [2024-11-19 03:16:42.356653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.356698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.356825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.356853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.356935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.356987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.357162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.357223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.357521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.357564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.357774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.357801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.357939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.357965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.358253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.358304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.358479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.358548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.358732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.358759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 
00:35:32.087 [2024-11-19 03:16:42.358865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.358891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.358993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.359020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.359140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.359205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.359369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.359449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.359609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.359635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.359779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.359806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.359921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.359947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.360041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.360089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.360307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.360338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.360530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.360558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 
00:35:32.087 [2024-11-19 03:16:42.360647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.360743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.360866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.360892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.361000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.361027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.361198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.361261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.361412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.361438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.361625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.361652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.361787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.361814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.361908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.361945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.362042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.362082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.362245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.362302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 
00:35:32.087 [2024-11-19 03:16:42.362455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.362511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.362620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.362647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.362772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.362800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.362923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.362949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.363117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.087 [2024-11-19 03:16:42.363172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.087 qpair failed and we were unable to recover it. 00:35:32.087 [2024-11-19 03:16:42.363361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.363414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.363533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.363560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.363701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.363729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.363854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.363881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.363968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.363994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 
00:35:32.088 [2024-11-19 03:16:42.364076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.364103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.364265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.364328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.364619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.364646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.364806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.364835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.364923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.364951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.365047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.365074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.365160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.365187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.365306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.365333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.365466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.365506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.365631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.365659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 
00:35:32.088 [2024-11-19 03:16:42.365785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.365813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.365952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.365979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.366058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.366085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.366300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.366365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.366574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.366601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.366738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.366766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.366846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.366873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.366962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.366989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.367070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.367096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.367184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.367215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 
00:35:32.088 [2024-11-19 03:16:42.367407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.367434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.367599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.367640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.367751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.367781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.367871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.367898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.368025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.368053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.368137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.368165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.368386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.368438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.368564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.368590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.368734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.368766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.368861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.368889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 
00:35:32.088 [2024-11-19 03:16:42.368999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.369036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.369145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.369172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.088 qpair failed and we were unable to recover it. 00:35:32.088 [2024-11-19 03:16:42.369263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.088 [2024-11-19 03:16:42.369292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.369396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.369422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.369583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.369624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.369713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.369743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.369839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.369867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.369969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.369996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.370118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.370146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.370277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.370335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 
00:35:32.089 [2024-11-19 03:16:42.370420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.370458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.370564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.370591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.370703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.370744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.370860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.370887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.371005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.371032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.371185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.371211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.371324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.371350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.371450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.371479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.371626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.371654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.371784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.371825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 
00:35:32.089 [2024-11-19 03:16:42.371921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.371949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.372085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.372155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.372301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.372353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.372568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.372622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.372810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.372851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.372982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.373011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.373230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.373285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.373511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.373564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.373659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.373685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.373819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.373846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 
00:35:32.089 [2024-11-19 03:16:42.373933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.373972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.374166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.374193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.374359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.374414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.374505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.374532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.089 [2024-11-19 03:16:42.374621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.089 [2024-11-19 03:16:42.374648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.089 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.374774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.374801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.374890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.374918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.375026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.375061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.375200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.375230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.375307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.375335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 
00:35:32.090 [2024-11-19 03:16:42.375463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.375503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.375662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.375700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.375821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.375850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.375964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.375992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.376111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.376149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.376276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.376304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.376396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.376422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.376512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.376539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.376651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.376696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.376810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.376837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 
00:35:32.090 [2024-11-19 03:16:42.376967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.377137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.377268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.377376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.377495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.377626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.377799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.377947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.377977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.378125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.378153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.378239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.378265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 
00:35:32.090 [2024-11-19 03:16:42.378413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.378442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.378532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.378560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.378669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.378704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.378802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.378878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.379130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.379206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.379454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.379532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.379731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.379759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.379845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.379872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.380053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.380118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.380405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.380480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 
00:35:32.090 [2024-11-19 03:16:42.380662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.380696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.380785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.380811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.090 qpair failed and we were unable to recover it. 00:35:32.090 [2024-11-19 03:16:42.380922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.090 [2024-11-19 03:16:42.380948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.381105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.381131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.381275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.381331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.381510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.381581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.381674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.381715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.381804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.381831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.381921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.381948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.382093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.382131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 
00:35:32.091 [2024-11-19 03:16:42.382295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.382352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.382484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.382538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.382621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.382650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.382791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.382818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.382908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.382935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.383020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.383134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.383245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.383385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.383526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 
00:35:32.091 [2024-11-19 03:16:42.383708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.383819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.383943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.383982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.384137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.384164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.384246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.384274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.384365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.384393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.384484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.384513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.384624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.384655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.384771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.384799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.384915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.384943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 
00:35:32.091 [2024-11-19 03:16:42.385041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.385068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.385148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.385175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.385318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.385345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.385455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.385482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.385566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.385595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.385680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.385718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.385842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.385882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.386004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.386032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.386118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.386145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 00:35:32.091 [2024-11-19 03:16:42.386300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.091 [2024-11-19 03:16:42.386327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.091 qpair failed and we were unable to recover it. 
00:35:32.091 [2024-11-19 03:16:42.386468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.386526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.386644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.386671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.386781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.386810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.386900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.386926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.387010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.387125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.387262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.387370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.387526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.387698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 
00:35:32.092 [2024-11-19 03:16:42.387818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.387937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.387964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.388038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.388072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.388271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.388334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.388524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.388551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.388702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.388733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.388839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.388879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.389021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.389106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.389344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.389395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.389491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.389519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 
00:35:32.092 [2024-11-19 03:16:42.389611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.389637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.389735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.389764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.389844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.389876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.389986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.390015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.390143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.390206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.390403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.390477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.390570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.390600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.390720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.390748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.390836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.390863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.390946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.390971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 
00:35:32.092 [2024-11-19 03:16:42.391084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.391123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.391226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.391253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.391362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.391389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.391481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.391511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.391626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.391654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.391742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.391770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.391865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.092 [2024-11-19 03:16:42.391893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.092 qpair failed and we were unable to recover it. 00:35:32.092 [2024-11-19 03:16:42.392011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.392039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.392163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.392199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.392431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.392494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 
00:35:32.093 [2024-11-19 03:16:42.392637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.392666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.392781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.392822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.392914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.392942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.393029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.393056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.393146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.393174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.393295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.393321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.393462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.393489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.393581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.393608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.393733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.393761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.393873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.393901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 
00:35:32.093 [2024-11-19 03:16:42.394022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.394049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.394194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.394221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.394309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.394335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.394431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.394459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.394596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.394636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.394741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.394770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.394886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.394913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.395086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.395152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.395390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.395445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.395536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.395567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 
00:35:32.093 [2024-11-19 03:16:42.395677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.395709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.395803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.395829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.395933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.395964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.396073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.396112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.396253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.396279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.396398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.396424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.396549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.396574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.396707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.396748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.396863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.396891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.397013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.397054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 
00:35:32.093 [2024-11-19 03:16:42.397148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.397177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.093 qpair failed and we were unable to recover it. 00:35:32.093 [2024-11-19 03:16:42.397340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.093 [2024-11-19 03:16:42.397396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.397488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.397516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.397631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.397658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.397790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.397818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.397907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.397934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.398022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.398169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.398282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.398423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 
00:35:32.094 [2024-11-19 03:16:42.398534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.398703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.398834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.398947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.398974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.399060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.399086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.399209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.399238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.399463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.399519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.399597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.399625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.399757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.399785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.399873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.399906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 
00:35:32.094 [2024-11-19 03:16:42.399988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.400101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.400257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.400391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.400508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.400650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.400776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.400908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.400977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.401336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.401413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.401583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.401610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 
00:35:32.094 [2024-11-19 03:16:42.401726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.401755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.401845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.401874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.402112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.402166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.402328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.402382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.402470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.402497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.402587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.402613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.402705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.402733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.402818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.402845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.094 qpair failed and we were unable to recover it. 00:35:32.094 [2024-11-19 03:16:42.402954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.094 [2024-11-19 03:16:42.402981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.403096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.403160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 
00:35:32.095 [2024-11-19 03:16:42.403434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.403498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.403661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.403695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.403817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.403845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.403935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.403962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.404095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.404160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.404290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.404350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.404499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.404528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.404624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.404665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.404791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.404819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.404962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.405031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 
00:35:32.095 [2024-11-19 03:16:42.405369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.405420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.405627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.405678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.405822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.405851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.405974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.406002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.406225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.406288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.406379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.406406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.406487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.406514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.406599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.406627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.406767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.406794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.406888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.406915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 
00:35:32.095 [2024-11-19 03:16:42.407001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.407028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.407107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.407145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.407256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.407282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.407376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.407402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.407558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.407599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.407726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.407755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.407880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.407919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.408058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.408121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.408301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.408329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.408414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.408445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 
00:35:32.095 [2024-11-19 03:16:42.408560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.408588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.408682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.408732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.408878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.408906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.409026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.409052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.409167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.409195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.409283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.095 [2024-11-19 03:16:42.409311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.095 qpair failed and we were unable to recover it. 00:35:32.095 [2024-11-19 03:16:42.409401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.409428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.409533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.409560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.409672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.409710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.409804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.409831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 
00:35:32.096 [2024-11-19 03:16:42.409921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.409949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.410084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.410129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.410217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.410245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.410370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.410397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.410513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.410540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.410671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.410720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.410834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.410868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.410967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.411084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.411216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 
00:35:32.096 [2024-11-19 03:16:42.411354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.411500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.411685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.411831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.411948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.411975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.412105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.412132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.412272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.412299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.412410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.412436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.412519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.412546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.412670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.412718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 
00:35:32.096 [2024-11-19 03:16:42.412836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.412864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.412998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.413039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.413202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.413261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.413353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.413380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.413454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.413481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.413618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.413655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.413793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.413834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.413979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.414053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.414299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.414369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.414543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.414570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 
00:35:32.096 [2024-11-19 03:16:42.414661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.414701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.414813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.414844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.414941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.414969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.415195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.096 [2024-11-19 03:16:42.415274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.096 qpair failed and we were unable to recover it. 00:35:32.096 [2024-11-19 03:16:42.415576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.415602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.415746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.415780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.415895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.415922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.416063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.416090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.416246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.416292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.416442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.416535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 
00:35:32.097 [2024-11-19 03:16:42.416645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.416671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.416762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.416789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.416915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.416941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.417098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.417164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.417392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.417445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.417563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.417590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.417714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.417742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.417869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.417896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.417983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.418010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.418092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.418119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 
00:35:32.097 [2024-11-19 03:16:42.418195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.418222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.418411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.418438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.418567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.418593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.418709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.418749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.418882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.418911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.419060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.419129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.419312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.419339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.419483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.419510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.419596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.419623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.419740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.419767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 
00:35:32.097 [2024-11-19 03:16:42.419912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.419939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.420956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.420988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 00:35:32.097 [2024-11-19 03:16:42.421068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.097 [2024-11-19 03:16:42.421094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.097 qpair failed and we were unable to recover it. 
00:35:32.097 [2024-11-19 03:16:42.421206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.097 [2024-11-19 03:16:42.421232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420
00:35:32.097 qpair failed and we were unable to recover it.
00:35:32.098 [2024-11-19 03:16:42.421453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.098 [2024-11-19 03:16:42.421494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420
00:35:32.098 qpair failed and we were unable to recover it.
00:35:32.098 [2024-11-19 03:16:42.421752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.098 [2024-11-19 03:16:42.421781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:32.098 qpair failed and we were unable to recover it.
00:35:32.098 [2024-11-19 03:16:42.426375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.098 [2024-11-19 03:16:42.426414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:32.098 qpair failed and we were unable to recover it.
[... the same three-line sequence -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 03:16:42.421 through 03:16:42.455 (wall clock 00:35:32.097-00:35:32.103) for tqpair handles 0x7f1b74000b90, 0x7f1b70000b90, 0x7f1b7c000b90 and 0x1942b40, all attempting addr=10.0.0.2, port=4420; none of the qpairs could be recovered ...]
00:35:32.103 [2024-11-19 03:16:42.455470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.455496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.455583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.455611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.455722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.455750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.455844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.455872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.455985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.456012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.456125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.456151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.456270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.456299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.456374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.103 [2024-11-19 03:16:42.456406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.103 qpair failed and we were unable to recover it. 00:35:32.103 [2024-11-19 03:16:42.456515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.456542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.456673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.456719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 
00:35:32.104 [2024-11-19 03:16:42.456868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.456896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.457045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.457104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.457219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.457246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.457361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.457389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.457485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.457525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.457665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.457698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.457791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.457818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.457904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.457932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.458045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.458071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.458211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.458266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 
00:35:32.104 [2024-11-19 03:16:42.458461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.458489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.458582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.458609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.458724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.458751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.458866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.458893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.459017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.459058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.459217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.459262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.459407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.459474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.459615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.459642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.459763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.459791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.460013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.460069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 
00:35:32.104 [2024-11-19 03:16:42.460241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.460296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.460387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.460414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.460540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.460567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.460686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.460723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.460850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.460877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.461014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.461072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.461237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.461289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.461403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.461430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.461545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.461574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.461694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.461724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 
00:35:32.104 [2024-11-19 03:16:42.461866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.461892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.462056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.462110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.462200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.462227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.462393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.462434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.462550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.462578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.462725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.104 [2024-11-19 03:16:42.462752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.104 qpair failed and we were unable to recover it. 00:35:32.104 [2024-11-19 03:16:42.462866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.462893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.463055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.463121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.463333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.463384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.463467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.463494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 
00:35:32.105 [2024-11-19 03:16:42.463613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.463639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.463786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.463813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.463948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.463974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.464089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.464116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.464223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.464250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.464393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.464421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.464515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.464543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.464619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.464646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.464729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.464756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.464872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.464899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 
00:35:32.105 [2024-11-19 03:16:42.465007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.465034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.465157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.465183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.465300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.465340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.465470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.465510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.465628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.465655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.465781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.465808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.465894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.465921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.466038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.466064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.466257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.466285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.466470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.466496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 
00:35:32.105 [2024-11-19 03:16:42.466619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.466649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.466781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.466810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.466897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.466925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.467040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.467067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.467224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.467277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.467455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.467482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.467567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.467595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.467710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.467738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.467858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.467885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.467996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.468054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 
00:35:32.105 [2024-11-19 03:16:42.468169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.468196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.468347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.468387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.468512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.468540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.105 [2024-11-19 03:16:42.468655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.105 [2024-11-19 03:16:42.468684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.105 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.468809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.468836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.468951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.468979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.469124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.469151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.469235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.469267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.469352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.469380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.469460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.469488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 
00:35:32.106 [2024-11-19 03:16:42.469605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.469632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.469786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.469815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.469968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.470031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.470213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.470268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.470435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.470490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.470602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.470628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.470776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.470804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.470910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.470937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.471047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.471075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.471305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.471362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 
00:35:32.106 [2024-11-19 03:16:42.471521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.471593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.471715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.471742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.471852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.471879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.471970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.471996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.472105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.472132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.472214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.472242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.472386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.472413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.472531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.472560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.472678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.472713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.472806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.472833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 
00:35:32.106 [2024-11-19 03:16:42.472941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.472968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.473077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.473104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.473243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.473268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.473356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.473383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.473473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.473501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.473657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.473703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.473801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.473829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.473971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.473998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.474139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.474165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.474278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.474304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 
00:35:32.106 [2024-11-19 03:16:42.474421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.106 [2024-11-19 03:16:42.474448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.106 qpair failed and we were unable to recover it. 00:35:32.106 [2024-11-19 03:16:42.474528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.474555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.474661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.474708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.474835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.474862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.474961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.475002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.475136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.475165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.475329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.475383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.475494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.475521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.475641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.475668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.475809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.475848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 
00:35:32.107 [2024-11-19 03:16:42.476003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.476059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.476230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.476288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.476455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.476506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.476620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.476647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.476734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.476762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.476879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.476907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.477003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.477030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.477117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.477144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.477260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.477289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.477435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.477462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 
00:35:32.107 [2024-11-19 03:16:42.477581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.477608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.477729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.477758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.477855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.477894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.478022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.478197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.478340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.478478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.478613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.478733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.478857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 
00:35:32.107 [2024-11-19 03:16:42.478967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.478994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.479107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.479134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.479274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.479300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.107 [2024-11-19 03:16:42.479443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.107 [2024-11-19 03:16:42.479471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.107 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.479585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.479618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.479737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.479765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.479875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.479902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.480041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.480067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.480150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.480177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.480295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.480322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 
00:35:32.108 [2024-11-19 03:16:42.480468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.480494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.480612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.480639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.480757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.480786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.480900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.480929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.481079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.481119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.481333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.481400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.481616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.481642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.481735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.481762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.481878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.481906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.482020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.482046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 
00:35:32.108 [2024-11-19 03:16:42.482134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.482163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.482322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.482375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.482517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.482544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.482618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.482645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.482746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.482774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.482890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.482917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.483037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.483064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.483213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.483239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.483350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.483377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.483519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.483545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 
00:35:32.108 [2024-11-19 03:16:42.483643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.483683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.483829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.483869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.483963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.483992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.484126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.484176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.484322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.484380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.484496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.484524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.484670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.484703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.484848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.484874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.485007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.485057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.485289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.485342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 
00:35:32.108 [2024-11-19 03:16:42.485426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.485453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.108 qpair failed and we were unable to recover it. 00:35:32.108 [2024-11-19 03:16:42.485568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.108 [2024-11-19 03:16:42.485596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.485685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.485719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.485836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.485863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.485949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.485983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.486071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.486098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.486263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.486315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.486537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.486563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.486681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.486716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.486811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.486838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 
00:35:32.109 [2024-11-19 03:16:42.486928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.486955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.487058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.487127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.487356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.487407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.487550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.487576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.487666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.487698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.487784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.487812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.487891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.487918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.488058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.488085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.488249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.488312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.488428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.488457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 
00:35:32.109 [2024-11-19 03:16:42.488572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.488601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.488722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.488762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.488864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.488892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.488977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.489039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.489377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.489441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.489620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.489648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.489769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.489797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.489881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.489910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.490058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.490085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.490207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.490260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 
00:35:32.109 [2024-11-19 03:16:42.490344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.490371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.490498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.490528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.490640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.490666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.490770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.490811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.490961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.490989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.491103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.491130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.491205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.491231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.491446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.491504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.491619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.109 [2024-11-19 03:16:42.491646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.109 qpair failed and we were unable to recover it. 00:35:32.109 [2024-11-19 03:16:42.491745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.491773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 
00:35:32.110 [2024-11-19 03:16:42.491882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.491909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.491999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.492027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.492182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.492230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.492386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.492413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.492528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.492561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.492675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.492708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.492815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.492842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.492922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.492948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.493035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.493062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.493190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.493239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 
00:35:32.110 [2024-11-19 03:16:42.493348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.493376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.493487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.493514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.493607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.493634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.493753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.493780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.493889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.493917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.493998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.494025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.494168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.494196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.494308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.494334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.494484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.494511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.494622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.494648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 
00:35:32.110 [2024-11-19 03:16:42.494756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.494796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.494921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.494952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.495043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.495070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.495167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.495234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.495444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.495497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.495626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.495667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.495807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.495836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.495956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.495983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.496077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.496105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.496226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.496255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 
00:35:32.110 [2024-11-19 03:16:42.496373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.496400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.496510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.496541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.496658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.496684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.496804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.496831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.496912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.496938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.497025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.497052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.110 [2024-11-19 03:16:42.497171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.110 [2024-11-19 03:16:42.497198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.110 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.497284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.497310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.497426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.497456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.497597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.497624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 
00:35:32.111 [2024-11-19 03:16:42.497734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.497762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.497878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.497905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.498033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.498084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.498201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.498228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.498318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.498346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.498468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.498496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.498618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.498645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.498792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.498821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.498940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.498966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.499082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.499108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 
00:35:32.111 [2024-11-19 03:16:42.499221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.499248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.499333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.499360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.499493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.499533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.499659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.499687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.499792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.499820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.499932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.499958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.500101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.500128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.500291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.500343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.500489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.500516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.500629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.500656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 
00:35:32.111 [2024-11-19 03:16:42.500810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.500838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.500926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.500953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.501036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.501064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.501268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.501319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.501461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.501498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.501573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.501599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.501744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.501772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.501860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.501889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.502004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.502031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.502144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.502172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 
00:35:32.111 [2024-11-19 03:16:42.502260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.502288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.502397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.502428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.502573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.502600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.502710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.111 [2024-11-19 03:16:42.502738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.111 qpair failed and we were unable to recover it. 00:35:32.111 [2024-11-19 03:16:42.502852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.502878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.502973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.503000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.503119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.503146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.503262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.503288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.503410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.503438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.503553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.503582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 
00:35:32.112 [2024-11-19 03:16:42.503679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.503730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.503881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.503909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.504058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.504085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.504194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.504221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.504351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.504460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.504673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.504707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.504804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.504831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.504915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.504943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.505040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.505067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.505186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.505213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 
00:35:32.112 [2024-11-19 03:16:42.505294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.505320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.505433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.505460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.505572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.505600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.509848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.509891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.510060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.510091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.510287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.510315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.510449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.510477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.510596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.510622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.510749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.510777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.510887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.510914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 
00:35:32.112 [2024-11-19 03:16:42.511028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.511066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.511173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.511201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.511317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.511343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.511454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.511482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.511568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.511595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.511724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.511765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.511871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.511911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.112 qpair failed and we were unable to recover it. 00:35:32.112 [2024-11-19 03:16:42.512074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.112 [2024-11-19 03:16:42.512102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.512216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.512243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.512362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.512389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 
00:35:32.113 [2024-11-19 03:16:42.512508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.512548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.512650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.512683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.512783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.512810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.512926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.512953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.513072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.513099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.513190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.513217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.513337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.513365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.513480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.513510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.513631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.513659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.513815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.513855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 
00:35:32.113 [2024-11-19 03:16:42.514012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.514081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.514348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.514414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.514565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.514592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.514682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.514712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.514800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.514826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.514910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.514937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.515047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.515117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.515364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.515429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.515632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.515660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.515825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.515855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 
00:35:32.113 [2024-11-19 03:16:42.515975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.516002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.516094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.516121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.516288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.516339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.516467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.516516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.516608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.516637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.516745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.516773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.516920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.516947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.517168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.517236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.517484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.517561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.517771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.517798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 
00:35:32.113 [2024-11-19 03:16:42.517883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.517910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.518029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.518067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.518167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.518194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.518365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.518415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.518507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.113 [2024-11-19 03:16:42.518534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.113 qpair failed and we were unable to recover it. 00:35:32.113 [2024-11-19 03:16:42.518636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.518662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.518839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.518879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.519017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.519071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.519242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.519298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.519477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.519532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 
00:35:32.114 [2024-11-19 03:16:42.519652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.519679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.519829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.519856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.520034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.520091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.520285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.520313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.520527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.520580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.520662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.520695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.520813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.520840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.520925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.520952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.521069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.521097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.521213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.521239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 
00:35:32.114 [2024-11-19 03:16:42.521315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.521340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.521480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.521508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.521641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.521678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.521778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.521805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.521921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.521950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.522074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.522101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.522256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.522329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.522425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.522454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.522614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.522655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.522792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.522821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 
00:35:32.114 [2024-11-19 03:16:42.522937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.522964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.523151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.523215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.523467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.523540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.523725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.523760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.523855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.523883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.523977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.524010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.524192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.524248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.524472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.524500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.524631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.524679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.524815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.524850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 
00:35:32.114 [2024-11-19 03:16:42.524967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.524995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.525135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.525162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.525252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.114 [2024-11-19 03:16:42.525278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.114 qpair failed and we were unable to recover it. 00:35:32.114 [2024-11-19 03:16:42.525370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.525397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.525512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.525539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.525629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.525656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.525756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.525786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.525900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.525927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.526109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.526160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.526375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.526428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 
00:35:32.115 [2024-11-19 03:16:42.526550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.526590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.526725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.526756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.526919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.526959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.527195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.527223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.527410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.527477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.527667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.527699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.527793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.527820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.527937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.527963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.528103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.528130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.528246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.528272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 
00:35:32.115 [2024-11-19 03:16:42.528457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.528520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.528699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.528741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.528877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.528917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.529029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.529058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.529201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.529229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.529379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.529438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.529552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.529581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.529701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.529728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.529852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.529884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.529986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.530026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 
00:35:32.115 [2024-11-19 03:16:42.530136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.530177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.530375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.530444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.530637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.530664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.530805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.530834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.530946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.531025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.531189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.531254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.531392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.531464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.531652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.531682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.531832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.531860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.115 [2024-11-19 03:16:42.531954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.531980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 
00:35:32.115 [2024-11-19 03:16:42.532070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.115 [2024-11-19 03:16:42.532098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.115 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.532186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.532211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.532318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.532383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.532465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.532491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.532632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.532659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.532789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.532817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.532943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.532971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.533077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.533105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.533245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.533273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.533359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.533385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 
00:35:32.116 [2024-11-19 03:16:42.533516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.533557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.533683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.533719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.533830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.533871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.534017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.534045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.534166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.534193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.534302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.534329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.534415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.534443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.534601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.534641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.534749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.534790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 00:35:32.116 [2024-11-19 03:16:42.534887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.116 [2024-11-19 03:16:42.534916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.116 qpair failed and we were unable to recover it. 
00:35:32.116 [2024-11-19 03:16:42.535001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.116 [2024-11-19 03:16:42.535028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420
00:35:32.116 qpair failed and we were unable to recover it.
00:35:32.116 [2024-11-19 03:16:42.535107] ... [2024-11-19 03:16:42.566842] The same three-message failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back for tqpairs 0x1942b40, 0x7f1b70000b90, 0x7f1b74000b90, and 0x7f1b7c000b90 through log timestamp 00:35:32.122.
00:35:32.122 [2024-11-19 03:16:42.566979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.567148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.567291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.567406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.567557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.567717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.567833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.567935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.567962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.568045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.568071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.568202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.568254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 
00:35:32.122 [2024-11-19 03:16:42.568461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.568510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.568616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.568657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.568788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.568817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.568941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.568968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.569055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.569082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.569192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.569219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.569305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.569332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.569486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.569528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.569617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.569647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.569782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.569811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 
00:35:32.122 [2024-11-19 03:16:42.569895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.569921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.570032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.570060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.570169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.570196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.570325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.570355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.570446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.122 [2024-11-19 03:16:42.570474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.122 qpair failed and we were unable to recover it. 00:35:32.122 [2024-11-19 03:16:42.570600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.570628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.570713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.570739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.570860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.570887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.570965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.570990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.571066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.571092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 
00:35:32.123 [2024-11-19 03:16:42.571200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.571226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.571344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.571372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.571450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.571478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.571595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.571623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.571722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.571752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.571849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.571877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.572012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.572058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.572154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.572182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.572323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.572390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.572533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.572561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 
00:35:32.123 [2024-11-19 03:16:42.572642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.572667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.572775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.572805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.572912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.572988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.573155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.573216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.573404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.573455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.573595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.573624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.573770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.573801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.573895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.573923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.574094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.574146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.574286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.574313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 
00:35:32.123 [2024-11-19 03:16:42.574406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.574435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.574537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.574565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.574673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.574708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.574827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.574853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.574965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.574992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.575077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.575181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.575291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.575435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.575608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 
00:35:32.123 [2024-11-19 03:16:42.575723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.575835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.575942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.575967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.576131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.576207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.576374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.576450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.576601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.123 [2024-11-19 03:16:42.576629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.123 qpair failed and we were unable to recover it. 00:35:32.123 [2024-11-19 03:16:42.576747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.576776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.576861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.576886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.577035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.577065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.577177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.577206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 
00:35:32.124 [2024-11-19 03:16:42.577339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.577410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.577531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.577571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.577709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.577739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.577861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.577888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.577975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.578045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.578190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.578266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.578412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.578488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.578681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.578715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.578796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.578821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.578904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.578928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 
00:35:32.124 [2024-11-19 03:16:42.579071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.579136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.579443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.579508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.579646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.579673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.579800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.579829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.579942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.579969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.580080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.580106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.580223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.580249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.580339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.580367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.580482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.580508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.580588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.580616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 
00:35:32.124 [2024-11-19 03:16:42.580749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.580795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.580892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.580920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.581013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.581042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.581200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.581251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.581420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.581471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.581563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.581590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.581710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.581738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.581824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.581850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.581973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.582002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.582185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.582213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 
00:35:32.124 [2024-11-19 03:16:42.582389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.582441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.582554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.582593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.582720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.582746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.582859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.582885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.582980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.124 [2024-11-19 03:16:42.583008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.124 qpair failed and we were unable to recover it. 00:35:32.124 [2024-11-19 03:16:42.583094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.583120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.583209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.583236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.583345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.583374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.583490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.583516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.583609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.583649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 
00:35:32.125 [2024-11-19 03:16:42.583783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.583813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.583933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.583963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.584110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.584218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.584329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.584469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.584609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.584774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.584914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.584994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.585021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 
00:35:32.125 [2024-11-19 03:16:42.585106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.585134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.585255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.585282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.585371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.585400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.585509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.585535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.585695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.585736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.585835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.585863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.585980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.586009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.586127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.586190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.586342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.586395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.586510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.586538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 
00:35:32.125 [2024-11-19 03:16:42.586623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.586657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.586812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.586843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.586962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.586990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.587169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.587196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.587380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.587435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.587516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.587542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.587625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.587652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.587768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.587809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.587923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.587974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.588157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.588226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 
00:35:32.125 [2024-11-19 03:16:42.588559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.588624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.588799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.588828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.588912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.588938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.125 [2024-11-19 03:16:42.589075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.125 [2024-11-19 03:16:42.589102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.125 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.589267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.589330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.589477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.589507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.589654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.589683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.589786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.589814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.589926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.589954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.590122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.590173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 
00:35:32.126 [2024-11-19 03:16:42.590322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.590373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.590539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.590568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.590735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.590763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.590875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.590902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.590980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.591056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.591218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.591268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.591422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.591449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.591555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.591586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.591733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.591761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.591842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.591867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 
00:35:32.126 [2024-11-19 03:16:42.592000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.592057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.592240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.592293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.592508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.592547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.592705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.592735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.592855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.592884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.592999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.593026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.593221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.593290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.593592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.126 [2024-11-19 03:16:42.593658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.126 qpair failed and we were unable to recover it. 00:35:32.126 [2024-11-19 03:16:42.593839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.593867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.593952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.593980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 
00:35:32.127 [2024-11-19 03:16:42.594098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.594158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.594354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.594382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.594499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.594527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.594656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.594703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.594825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.594854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.594983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.595012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.595156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.595217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.595369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.595425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.595507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.595534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.595653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.595682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 
00:35:32.127 [2024-11-19 03:16:42.595793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.595820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.595905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.595929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.596915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.596940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 
00:35:32.127 [2024-11-19 03:16:42.597045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.597110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.597251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.597340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.597613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.597659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.597810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.597838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.597925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.597955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.598042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.598119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.598288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.598352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.598594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.598659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.598815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.598853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.598967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.598994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 
00:35:32.127 [2024-11-19 03:16:42.599272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.599338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.599490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.599562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.599773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.599800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.599899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.599924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.600039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.600065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.600160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.600201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.600392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.600455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.600579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.127 [2024-11-19 03:16:42.600612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.127 qpair failed and we were unable to recover it. 00:35:32.127 [2024-11-19 03:16:42.600743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.600771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.600850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.600877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 
00:35:32.128 [2024-11-19 03:16:42.600994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.601142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.601261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.601374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.601516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.601657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.601780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.601928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.601957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.602073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.602100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.602213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.602240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 
00:35:32.128 [2024-11-19 03:16:42.602349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.602376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.602494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.602521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.602621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.602661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.602796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.602826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.602951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.602997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.603242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.603298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.603487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.603516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.603659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.603687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.603788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.603816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.603927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.603955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 
00:35:32.128 [2024-11-19 03:16:42.604074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.604101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.604270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.604297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.604406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.604433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.604547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.604586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.604713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.604741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.604836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.604864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.605009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.605039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.605157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.605185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.605312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.605373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.605465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.605492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 
00:35:32.128 [2024-11-19 03:16:42.605577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.605604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.605711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.605739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.605859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.605888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.605986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.606027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.606146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.606174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.606265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.606331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.606569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.128 [2024-11-19 03:16:42.606635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.128 qpair failed and we were unable to recover it. 00:35:32.128 [2024-11-19 03:16:42.606848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.606876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.606970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.606999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.607116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.607144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 
00:35:32.129 [2024-11-19 03:16:42.607309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.607371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.607509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.607552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.607662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.607696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.607815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.607842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.607960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.607987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.608075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.608102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.608221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.608249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.608339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.608365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.608449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.608476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.608588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.608615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 
00:35:32.129 [2024-11-19 03:16:42.608733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.608760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.608840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.608866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.608969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.609010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.609196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.609251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.609397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.609447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.609542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.609569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.609677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.609714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.609828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.609855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.609942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.609974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.610108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.610177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 
00:35:32.129 [2024-11-19 03:16:42.610273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.610300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.610415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.610444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.610532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.610558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.610702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.610730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.610823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.610850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.610936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.610962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.611052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.611079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.611236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.611300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.611574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.611634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.611715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.611741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 
00:35:32.129 [2024-11-19 03:16:42.611849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.611876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.612022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.612076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.612198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.612261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.612496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.612550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.612671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.612707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.612826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.612853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.612970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.613049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.129 [2024-11-19 03:16:42.613292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.129 [2024-11-19 03:16:42.613358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.129 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.613670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.613750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.613881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.613907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 
00:35:32.130 [2024-11-19 03:16:42.614077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.614132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.614261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.614314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.614418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.614485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.614591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.614618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.614752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.614793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.614916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.614944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.615140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.615206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.615505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.615551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.615680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.615720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 00:35:32.130 [2024-11-19 03:16:42.615842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.130 [2024-11-19 03:16:42.615869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.130 qpair failed and we were unable to recover it. 
00:35:32.130 [2024-11-19 03:16:42.615954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.130 [2024-11-19 03:16:42.615981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420
00:35:32.130 qpair failed and we were unable to recover it.
00:35:32.130 [repeated connection-failure records from 03:16:42.615954 through 03:16:42.649902: connect() failed, errno = 111 (connection refused), followed by nvme_tcp_qpair_connect_sock sock connection errors for tqpairs 0x7f1b74000b90, 0x7f1b70000b90, 0x7f1b7c000b90, and 0x1942b40, all with addr=10.0.0.2, port=4420; every qpair failed and none could be recovered]
00:35:32.135 [2024-11-19 03:16:42.649875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.135 [2024-11-19 03:16:42.649902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420
00:35:32.135 qpair failed and we were unable to recover it.
00:35:32.135 [2024-11-19 03:16:42.649991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.135 [2024-11-19 03:16:42.650019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.135 qpair failed and we were unable to recover it. 00:35:32.135 [2024-11-19 03:16:42.650134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.135 [2024-11-19 03:16:42.650161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.135 qpair failed and we were unable to recover it. 00:35:32.135 [2024-11-19 03:16:42.650283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.135 [2024-11-19 03:16:42.650312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.135 qpair failed and we were unable to recover it. 00:35:32.135 [2024-11-19 03:16:42.650397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.135 [2024-11-19 03:16:42.650424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.135 qpair failed and we were unable to recover it. 00:35:32.135 [2024-11-19 03:16:42.650522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.135 [2024-11-19 03:16:42.650562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.135 qpair failed and we were unable to recover it. 00:35:32.135 [2024-11-19 03:16:42.650661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.135 [2024-11-19 03:16:42.650699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.135 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.650793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.650820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.650980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.651007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.651155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.651224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.651473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.651535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 
00:35:32.136 [2024-11-19 03:16:42.651654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.651683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.651835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.651863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.651948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.651973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.652166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.652238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.652397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.652462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.652581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.652620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.652736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.652761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.652843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.652867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.652950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.653011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.653245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.653311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 
00:35:32.136 [2024-11-19 03:16:42.653462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.653511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.653729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.653780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.653916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.653942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.654038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.654064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.654335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.654398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.654551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.654576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.654718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.654744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.654837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.654864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.654967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.655008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.655156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.655185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 
00:35:32.136 [2024-11-19 03:16:42.655287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.655328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.655492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.655546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.655655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.655683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.655810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.655837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.655937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.655965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.656063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.656103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.656222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.656255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.656438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.656490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.656630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.656657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.656790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.656818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 
00:35:32.136 [2024-11-19 03:16:42.656911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.656939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.657032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.657058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.657174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.657200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.657314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.657342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.657424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.657449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.136 qpair failed and we were unable to recover it. 00:35:32.136 [2024-11-19 03:16:42.657577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.136 [2024-11-19 03:16:42.657618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.657728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.657758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.657846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.657874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.657956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.657982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.658090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.658118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 
00:35:32.137 [2024-11-19 03:16:42.658210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.658236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.658327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.658353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.658497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.658524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.658604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.658630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.658743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.658771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.658864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.658891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.659006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.659116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.659280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.659427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 
00:35:32.137 [2024-11-19 03:16:42.659576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.659703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.659817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.659930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.659959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.660105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.660132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.660333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.660382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.660473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.660501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.660584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.660609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.660726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.660756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.660912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.660953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 
00:35:32.137 [2024-11-19 03:16:42.661072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.661099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.661246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.661312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.661601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.661666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.661846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.661875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.661964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.661989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.662143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.662192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.662329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.662405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.662524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.662552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.662684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.662734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.662855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.662884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 
00:35:32.137 [2024-11-19 03:16:42.663008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.663036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.663119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.663145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.663321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.663377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.137 [2024-11-19 03:16:42.663517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.137 [2024-11-19 03:16:42.663545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.137 qpair failed and we were unable to recover it. 00:35:32.431 [2024-11-19 03:16:42.663683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.431 [2024-11-19 03:16:42.663739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.431 qpair failed and we were unable to recover it. 00:35:32.431 [2024-11-19 03:16:42.663835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.431 [2024-11-19 03:16:42.663864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.431 qpair failed and we were unable to recover it. 00:35:32.431 [2024-11-19 03:16:42.664051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.431 [2024-11-19 03:16:42.664108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.431 qpair failed and we were unable to recover it. 00:35:32.431 [2024-11-19 03:16:42.664227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.431 [2024-11-19 03:16:42.664294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.431 qpair failed and we were unable to recover it. 00:35:32.431 [2024-11-19 03:16:42.664460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.431 [2024-11-19 03:16:42.664513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.431 qpair failed and we were unable to recover it. 00:35:32.431 [2024-11-19 03:16:42.664655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.431 [2024-11-19 03:16:42.664684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.431 qpair failed and we were unable to recover it. 
00:35:32.431 [2024-11-19 03:16:42.664780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.431 [2024-11-19 03:16:42.664807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.431 qpair failed and we were unable to recover it. 00:35:32.431 [2024-11-19 03:16:42.664916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.664944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.665968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.665995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 
00:35:32.432 [2024-11-19 03:16:42.666084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.666110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.666206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.666235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.666354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.666382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.666542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.666582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.666678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.666714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.666842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.666869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.666960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.666987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.667159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.667210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.667363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.667416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.667526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.667566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 
00:35:32.432 [2024-11-19 03:16:42.667658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.667684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.667805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.667832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.667919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.667947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.668052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.668190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.668302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.668404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.668577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.668719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.668830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 
00:35:32.432 [2024-11-19 03:16:42.668935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.668961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.669050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.669079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.669198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.432 [2024-11-19 03:16:42.669227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.432 qpair failed and we were unable to recover it. 00:35:32.432 [2024-11-19 03:16:42.669357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.669385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.669498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.669525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.669644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.669671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.669832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.669871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.669970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.669997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.670137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.670200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.670490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.670555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 
00:35:32.433 [2024-11-19 03:16:42.670755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.670785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.670900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.670927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.671092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.671143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.671302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.671353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.671446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.671474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.671595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.671625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.671752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.671793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.671896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.671924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.672090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.672145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.672369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.672419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 
00:35:32.433 [2024-11-19 03:16:42.672531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.672557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.672650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.672677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.672767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.672793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.672878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.672912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.673109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.673154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.673329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.673375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.673578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.673603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.673722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.673750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.673860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.673886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.674024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.674051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 
00:35:32.433 [2024-11-19 03:16:42.674221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.674286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.674577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.674618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.674735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.674764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.674852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.674879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.674967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.674992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.675098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.675125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.433 [2024-11-19 03:16:42.675206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.433 [2024-11-19 03:16:42.675235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.433 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.675356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.675384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.675533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.675560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.675673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.675707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 
00:35:32.434 [2024-11-19 03:16:42.675791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.675818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.675937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.675963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.676044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.676068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.676147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.676217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.676509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.676568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.676718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.676747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.676860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.676888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.677007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.677036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.677121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.677148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.677334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.677401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 
00:35:32.434 [2024-11-19 03:16:42.677486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.677514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.677605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.677632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.677751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.677780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.677926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.677954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.678116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.678167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.678248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.678274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.678384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.678411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.678496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.678525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.678609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.678636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.678727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.678753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 
00:35:32.434 [2024-11-19 03:16:42.678874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.678902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.678996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.679066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.679181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.679207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.679372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.679431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.679514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.679541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.679654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.679681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.679777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.679802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.679894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.679922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.680042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.680069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 00:35:32.434 [2024-11-19 03:16:42.680198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.434 [2024-11-19 03:16:42.680226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.434 qpair failed and we were unable to recover it. 
00:35:32.434 [2024-11-19 03:16:42.680356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.680386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.680505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.680532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.680641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.680668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.680760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.680786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.680866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.680893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.681022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.681090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.681243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.681270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.681395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.681422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.681529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.681555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.681639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.681663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 
00:35:32.435 [2024-11-19 03:16:42.681788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.681814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.681898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.681924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.682093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.682153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.682263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.682290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.682406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.682432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.682547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.682574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.682694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.682722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.682848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.682888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.683051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.683110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.683291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.683348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 
00:35:32.435 [2024-11-19 03:16:42.683464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.683491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.683602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.683629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.683745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.683772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.683888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.683916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.684058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.684086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.684169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.684201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.684324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.684353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.684440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.684467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.684576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.684604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 00:35:32.435 [2024-11-19 03:16:42.684698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.435 [2024-11-19 03:16:42.684726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.435 qpair failed and we were unable to recover it. 
00:35:32.436 [2024-11-19 03:16:42.684816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.684842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.684954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.684980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.685069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.685098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.685184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.685217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.685339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.685380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.685526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.685554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.685699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.685726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.685812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.685839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.685958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.686040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.686257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.686321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 
00:35:32.436 [2024-11-19 03:16:42.686562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.686591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.686679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.686711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.686805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.686831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.686918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.686944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.687036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.687063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.687150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.687174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.687317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.687344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.687428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.687453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.687593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.687620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.687743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.687772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 
00:35:32.436 [2024-11-19 03:16:42.687889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.687916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.688052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.688078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.688311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.688377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.688543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.688583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.688715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.688755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.688873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.688902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.689011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.689037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.689241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.689267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.689355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.689383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.689466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.689492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 
00:35:32.436 [2024-11-19 03:16:42.689653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.689708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.689803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.689831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.689944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.436 [2024-11-19 03:16:42.689971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.436 qpair failed and we were unable to recover it. 00:35:32.436 [2024-11-19 03:16:42.690082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.690151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.690241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.690268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.690454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.690481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.690599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.690626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.690740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.690769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.690856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.690888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.690985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.691014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 
00:35:32.437 [2024-11-19 03:16:42.691156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.691183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.691302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.691329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.691447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.691476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.691634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.691674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.691789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.691819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.691961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.691988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.692157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.692218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.692382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.692435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.692561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.692590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.692707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.692737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 
00:35:32.437 [2024-11-19 03:16:42.692822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.692848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.692935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.692963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.693119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.693172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.693345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.693394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.693514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.693550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.693665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.693701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.693854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.693881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.693980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.694008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.694118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.694145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.694252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.694279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 
00:35:32.437 [2024-11-19 03:16:42.694367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.694396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.694519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.694559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.437 [2024-11-19 03:16:42.694704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.437 [2024-11-19 03:16:42.694735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.437 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.694873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.694899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.694984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.695096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.695203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.695341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.695470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.695606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 
00:35:32.438 [2024-11-19 03:16:42.695761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.695910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.695937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.696093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.696121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.696233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.696260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.696373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.696400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.696521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.696548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.696697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.696726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.696843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.696870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.696952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.696977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 00:35:32.438 [2024-11-19 03:16:42.697139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.438 [2024-11-19 03:16:42.697196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.438 qpair failed and we were unable to recover it. 
00:35:32.438 [2024-11-19 03:16:42.697366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.438 [2024-11-19 03:16:42.697425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:32.438 qpair failed and we were unable to recover it.
00:35:32.438 [2024-11-19 03:16:42.697804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.438 [2024-11-19 03:16:42.697831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420
00:35:32.438 qpair failed and we were unable to recover it.
00:35:32.438 [2024-11-19 03:16:42.698750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.438 [2024-11-19 03:16:42.698784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420
00:35:32.438 qpair failed and we were unable to recover it.
00:35:32.439 [2024-11-19 03:16:42.700755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.439 [2024-11-19 03:16:42.700795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:32.439 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously from 03:16:42.697 through 03:16:42.730, alternating over tqpair addresses 0x7f1b7c000b90, 0x7f1b74000b90, 0x7f1b70000b90 and 0x1942b40, always with addr=10.0.0.2, port=4420 ...]
00:35:32.445 [2024-11-19 03:16:42.730704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.730732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.730827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.730853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.730950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.730989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.731217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.731280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.731476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.731504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.731647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.731675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.731829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.731859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.732052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.732103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.732214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.732275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.732406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.732471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 
00:35:32.445 [2024-11-19 03:16:42.732587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.732615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.732732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.732760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.732880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.732905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.733022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.733049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.733169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.733195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.733303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.733330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.733486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.733526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.733672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.733707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.445 qpair failed and we were unable to recover it. 00:35:32.445 [2024-11-19 03:16:42.733821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.445 [2024-11-19 03:16:42.733847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.733926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.733952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 
00:35:32.446 [2024-11-19 03:16:42.734081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.734122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.734295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.734359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.734511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.734540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.734631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.734660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.734758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.734786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.734874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.734903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.734997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.735025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.735266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.735331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.735504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.735531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.735636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.735662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 
00:35:32.446 [2024-11-19 03:16:42.735766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.735795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.735985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.736014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.736207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.736261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.736445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.736509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.736655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.736698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.736818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.736858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.736955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.736982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.737220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.737275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.737423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.737482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.737596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.737624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 
00:35:32.446 [2024-11-19 03:16:42.737718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.737745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.737860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.737887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.738035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.738089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.738264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.738314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.738428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.738456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.738581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.738608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.738698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.738724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.738813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.738841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.738924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.446 [2024-11-19 03:16:42.738951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.446 qpair failed and we were unable to recover it. 00:35:32.446 [2024-11-19 03:16:42.739065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.739092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 
00:35:32.447 [2024-11-19 03:16:42.739206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.739233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.739332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.739372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.739471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.739500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.739630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.739672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.739811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.739840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.739971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.739999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.740134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.740179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.740272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.740301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.740414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.740441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.740521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.740548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 
00:35:32.447 [2024-11-19 03:16:42.740633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.740661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.740754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.740785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.740906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.740933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.741114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.741141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.741311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.741386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.741591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.741618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.741704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.741730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.741841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.741868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.741961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.741990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.742103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.742141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 
00:35:32.447 [2024-11-19 03:16:42.742319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.742380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.742489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.742520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.742648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.742700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.742841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.742883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.743001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.743029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.743221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.743287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.743522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.743568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.743709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.743736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.743826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.743854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.743951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.743978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 
00:35:32.447 [2024-11-19 03:16:42.744209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.744257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.447 qpair failed and we were unable to recover it. 00:35:32.447 [2024-11-19 03:16:42.744457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.447 [2024-11-19 03:16:42.744485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.744575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.744601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.744713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.744742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.744851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.744878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.745016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.745057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.745215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.745245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.745442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.745470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.745590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.745618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.745747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.745790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 
00:35:32.448 [2024-11-19 03:16:42.745971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.746020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.746179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.746248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.746413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.746467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.746609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.746647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.746802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.746830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.746950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.746977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.747149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.747176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.747396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.747450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.747558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.747599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.747719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.747746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 
00:35:32.448 [2024-11-19 03:16:42.747863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.747898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.748075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.748107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.748292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.748346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.748460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.748488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.748580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.748607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.748700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.748727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.748847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.748874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.749025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.749081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.749164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.448 [2024-11-19 03:16:42.749190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.448 qpair failed and we were unable to recover it. 00:35:32.448 [2024-11-19 03:16:42.749340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.749367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 
00:35:32.449 [2024-11-19 03:16:42.749478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.749506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.749592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.749618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.749746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.749773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.749870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.749897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.750000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.750041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.750172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.750202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.750321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.750351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.750490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.750517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.750649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.750693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.750848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.750876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 
00:35:32.449 [2024-11-19 03:16:42.750992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.751084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.751264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.751320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.751462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.751488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.751608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.751635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.751757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.751786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.751898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.751925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.752043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.752070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.752178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.752205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.752337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.752377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.752494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.752523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 
00:35:32.449 [2024-11-19 03:16:42.752630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.752658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.752784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.752813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.752904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.752932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.753046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.753073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.753193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.753222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.753342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.753369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.753456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.753485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.753569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.753596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.753712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.753740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.753857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.753884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 
00:35:32.449 [2024-11-19 03:16:42.754040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.754093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.449 qpair failed and we were unable to recover it. 00:35:32.449 [2024-11-19 03:16:42.754310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.449 [2024-11-19 03:16:42.754361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.754450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.754477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.754595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.754625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.754728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.754768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.754865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.754893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.755036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.755100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.755318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.755384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.755589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.755616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.755709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.755736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 
00:35:32.450 [2024-11-19 03:16:42.755823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.755849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.755947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.755974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.756099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.756153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.756276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.756315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.756456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.756495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.756616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.756642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.756761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.756798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.756909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.756934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.757047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.757074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.757157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.757182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 
00:35:32.450 [2024-11-19 03:16:42.757305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.757332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.757490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.757531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.757634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.757662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.757789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.757819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.757913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.757942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.758026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.758053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.758168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.758196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.758341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.758404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.758520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.758550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.758644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.758672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 
00:35:32.450 [2024-11-19 03:16:42.758799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.758828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.758915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.758942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.759101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.759164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.759355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.759383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.450 qpair failed and we were unable to recover it. 00:35:32.450 [2024-11-19 03:16:42.759522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.450 [2024-11-19 03:16:42.759554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.759696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.759725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.759813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.759842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.759922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.759948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.760056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.760117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.760199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.760224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 
00:35:32.451 [2024-11-19 03:16:42.760336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.760364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.760476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.760511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.760654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.760682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.760811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.760850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.760947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.760972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.761087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.761114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.761214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.761242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.761320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.761345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.761462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.761489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.761577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.761602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 
00:35:32.451 [2024-11-19 03:16:42.761730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.761771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.761890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.761919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.762030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.762057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.762196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.762223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.762341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.762368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.762487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.762514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.762612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.762640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.762754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.762782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.762913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.762954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.763074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.763102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 
00:35:32.451 [2024-11-19 03:16:42.763212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.763239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.763354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.763382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.451 [2024-11-19 03:16:42.763521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.451 [2024-11-19 03:16:42.763549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.451 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.763672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.763718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.763807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.763834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.763928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.763957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.764172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.764226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.764404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.764466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.764554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.764588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.764706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.764734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 
00:35:32.452 [2024-11-19 03:16:42.764846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.764874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.765015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.765042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.765122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.765149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.765346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.765409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.765520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.765549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.765700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.765729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.765852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.765883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.766009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.766036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.766246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.766272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.766558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.766586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 
00:35:32.452 [2024-11-19 03:16:42.766729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.766758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.766850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.766877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.766976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.767004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.767178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.767252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.767410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.767470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.767564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.767605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.767722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.767751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.767843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.767870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.767967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.767995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.768075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.768138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 
00:35:32.452 [2024-11-19 03:16:42.768390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.768416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.768589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.768614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.768732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.768759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.768845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.768869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.768958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.768988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.769118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.769159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.769329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.769399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.769511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.769549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.452 qpair failed and we were unable to recover it. 00:35:32.452 [2024-11-19 03:16:42.769674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.452 [2024-11-19 03:16:42.769709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.769830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.769866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 
00:35:32.453 [2024-11-19 03:16:42.769983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.770123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.770285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.770428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.770531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.770682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.770825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.770957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.770997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.771186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.771220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.771444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.771502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 
00:35:32.453 [2024-11-19 03:16:42.771614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.771642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.771740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.771769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.771861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.771890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.772004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.772071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.772182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.772209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.772295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.772321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.772433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.772461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.772575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.772604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.772721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.772750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.772866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.772894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 
00:35:32.453 [2024-11-19 03:16:42.772980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.773006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.773128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.453 [2024-11-19 03:16:42.773156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.453 qpair failed and we were unable to recover it. 00:35:32.453 [2024-11-19 03:16:42.773249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.773277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.773367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.773396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.773506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.773535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.773708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.773749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.773844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.773872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.773988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.774015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.774091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.774116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.774310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.774375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 
00:35:32.454 [2024-11-19 03:16:42.774552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.774580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.774685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.774732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.774859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.774889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.774973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.774999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.775114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.775142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.775372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.775433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.775554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.775582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.775697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.775727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.775855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.775886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.776048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.776125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 
00:35:32.454 [2024-11-19 03:16:42.776295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.776348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.776461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.776488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.776570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.776596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.776711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.776739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.776890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.776918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.777001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.777027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.777113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.777140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.777260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.777289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.777403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.777432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.777557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.777589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 
00:35:32.454 [2024-11-19 03:16:42.777708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.777736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.777855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.777883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.777987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.778027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.778373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.778442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.454 qpair failed and we were unable to recover it. 00:35:32.454 [2024-11-19 03:16:42.778620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.454 [2024-11-19 03:16:42.778648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.778773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.778801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.778887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.778913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.779104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.779170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.779459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.779486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.779631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.779658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 
00:35:32.455 [2024-11-19 03:16:42.779774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.779800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.779884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.779910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.780007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.780037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.780253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.780308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.780539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.780596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.780723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.780750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.780836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.780863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.781058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.781112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.781254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.781325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.781527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.781567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 
00:35:32.455 [2024-11-19 03:16:42.781658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.781687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.781818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.781858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.782929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.782954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 
00:35:32.455 [2024-11-19 03:16:42.783095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.783122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.783211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.783239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.783382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.783408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.783497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.783521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.783636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.783662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.455 [2024-11-19 03:16:42.783761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.455 [2024-11-19 03:16:42.783788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.455 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.783932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.783959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.784217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.784272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.784361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.784388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.784522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.784563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 
00:35:32.456 [2024-11-19 03:16:42.784715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.784745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.784865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.784893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.785897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.785923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 
00:35:32.456 [2024-11-19 03:16:42.786039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.786066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.786182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.786209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.786335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.786379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.786482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.786522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.786642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.786671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.786771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.786800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.786941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.786968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.787193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.787247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.787368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.787396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.787535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.787561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 
00:35:32.456 [2024-11-19 03:16:42.787681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.787713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.787802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.787833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.787921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.787946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.788103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.788155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.456 qpair failed and we were unable to recover it. 00:35:32.456 [2024-11-19 03:16:42.788266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.456 [2024-11-19 03:16:42.788293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.788404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.788443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.788550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.788590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.788676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.788713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.788813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.788841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.788928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.788955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 
00:35:32.457 [2024-11-19 03:16:42.789038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.789065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.789186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.789215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.789307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.789335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.789446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.789473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.789558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.789586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.789668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.789703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.789832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.789873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.789992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.790058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.790273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.790332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.790464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.790527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 
00:35:32.457 [2024-11-19 03:16:42.790639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.790667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.790790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.790839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.790944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.790973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.791092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.791196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.791337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.791454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.791570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.791685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.791813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 
00:35:32.457 [2024-11-19 03:16:42.791927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.791955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.792050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.792078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.792198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.792230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.792350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.792378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.792507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.792534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.792644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.792671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.792764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.792790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.457 [2024-11-19 03:16:42.792870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.457 [2024-11-19 03:16:42.792896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.457 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.793034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.793145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 
00:35:32.458 [2024-11-19 03:16:42.793301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.793575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.793682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.793798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.793904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.793932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.794030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.794057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.794171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.794199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.794320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.794350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.794441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.794468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 
00:35:32.458 [2024-11-19 03:16:42.794573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.794602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.794720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.794748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.794864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.794891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.795067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.795133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.795298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.795363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.795541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.795568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.795684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.795717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.795831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.795860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.796091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.796148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.796316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.796377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 
00:35:32.458 [2024-11-19 03:16:42.796544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.796598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.796718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.796745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.796835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.796861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.796948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.796973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.797069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.797096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.797323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.797384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.797538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.797567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.797686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.797721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.797834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.797861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.458 qpair failed and we were unable to recover it. 00:35:32.458 [2024-11-19 03:16:42.797950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.458 [2024-11-19 03:16:42.797976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 
00:35:32.459 [2024-11-19 03:16:42.798102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.798156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.798274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.798301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.798382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.798414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.798561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.798589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.798680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.798715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.798805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.798830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.798925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.798951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.799051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.799091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.799181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.799210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.799330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.799358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 
00:35:32.459 [2024-11-19 03:16:42.799480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.799507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.799597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.799624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.799737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.799765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.799883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.799911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.799996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.800021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.800115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.800144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.800263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.800290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.800403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.800430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.800539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.800566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.800652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.800679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 
00:35:32.459 [2024-11-19 03:16:42.800815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.800847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.800972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.801000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.801084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.801115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.801204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.801233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.801378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.801406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.801494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.801524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.801622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.459 [2024-11-19 03:16:42.801650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.459 qpair failed and we were unable to recover it. 00:35:32.459 [2024-11-19 03:16:42.801792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.801819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.801903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.801928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.802045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.802076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 
00:35:32.460 [2024-11-19 03:16:42.802193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.802221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.802364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.802391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.802505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.802533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.802649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.802678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.802800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.802828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.802943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.802970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.803138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.803198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.803373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.803400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.803484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.803512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.803603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.803632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 
00:35:32.460 [2024-11-19 03:16:42.803751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.803779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.803866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.803892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.803977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.804004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.804121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.804148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.804290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.804317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.804433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.804460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.804598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.804625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.804742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.804772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.804857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.804884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.805045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 
00:35:32.460 [2024-11-19 03:16:42.805173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.805352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.805463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.805574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.805720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.805841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.805963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.805992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.806102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-11-19 03:16:42.806165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.460 qpair failed and we were unable to recover it. 00:35:32.460 [2024-11-19 03:16:42.806247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.806276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.806511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.806539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 
00:35:32.461 [2024-11-19 03:16:42.806670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.806726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.806854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.806880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.807004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.807194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.807249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.807395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.807433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.807519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.807546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.807664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.807700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.807818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.807849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.807996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.808052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.808236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.808295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 
00:35:32.461 [2024-11-19 03:16:42.808435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.808462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.808570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.808597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.808716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.808744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.808863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.808891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.809117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.809179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.809300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.809370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.809513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.809540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.809656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.809683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.809783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.809810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.809928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.809955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 
00:35:32.461 [2024-11-19 03:16:42.810069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.810096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.810195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.810222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.810305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.810332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.810456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.810484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.810625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.810653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.810760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.810788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.810930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.810958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.811077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.811104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.811189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.811216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.811338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.811367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 
00:35:32.461 [2024-11-19 03:16:42.811481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.811509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.461 [2024-11-19 03:16:42.811592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-11-19 03:16:42.811618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.461 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.811729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.811757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.811842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.811868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.811970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.811999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.812119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.812146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.812320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.812360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.812492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.812520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.812636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.812665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.812772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.812800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 
00:35:32.462 [2024-11-19 03:16:42.812883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.812908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.813046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.813113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.813205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.813232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.813321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.813350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.813492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.813520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.813606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.813636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.813727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.813754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.813852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.813879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.814067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.814132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.814292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.814369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 
00:35:32.462 [2024-11-19 03:16:42.814559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.814587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.814675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.814710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.814819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.814846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.814937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.814990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.815143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.815201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.815525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.815589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.815795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.815824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.815912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.815939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.816082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.816119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.816293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.816350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 
00:35:32.462 [2024-11-19 03:16:42.816493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.816530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.816683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.816718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.816836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.816865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.816987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.462 [2024-11-19 03:16:42.817014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.462 qpair failed and we were unable to recover it. 00:35:32.462 [2024-11-19 03:16:42.817127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.817207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.817446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.817511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.817745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.817786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.817892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.817921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.818093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.818147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.818261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.818290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 
00:35:32.463 [2024-11-19 03:16:42.818396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.818438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.818539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.818568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.818712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.818752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.818857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.818885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.818976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.819002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.819121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.819148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.819350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.819432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.819624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.819664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.819795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.819825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.819964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.820029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 
00:35:32.463 [2024-11-19 03:16:42.820160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.820222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.820339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.820397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.820514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.820542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.820634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.820660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.820757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.820785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.820902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.820929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.821021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.821048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.821192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.821219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.821314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.821341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.821458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.821485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 
00:35:32.463 [2024-11-19 03:16:42.821606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.821634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.821723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.821753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.821876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.821916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.822011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.822039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.822164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.822193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.822344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.822408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.822595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.822635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.822733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.822763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.463 [2024-11-19 03:16:42.822885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.463 [2024-11-19 03:16:42.822942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.463 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.823119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.823169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 
00:35:32.464 [2024-11-19 03:16:42.823392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.823450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.823536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.823562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.823674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.823707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.823840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.823873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.823988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.824015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.824159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.824186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.824296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.824323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.824461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.824488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.824629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.824669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.824786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.824827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 
00:35:32.464 [2024-11-19 03:16:42.824951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.824981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.825103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.825130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.825273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.825333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.825411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.825437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.825551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.825578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.825701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.825730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.825840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.825867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.826004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.826033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.826160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.826187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.826301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.826329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 
00:35:32.464 [2024-11-19 03:16:42.826413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.826440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.826557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.826584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.826672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.826708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.826804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.826832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.826939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.464 [2024-11-19 03:16:42.827004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.464 qpair failed and we were unable to recover it. 00:35:32.464 [2024-11-19 03:16:42.827109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.827164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.827312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.827382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.827525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.827552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.827657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.827684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.827777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.827803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 
00:35:32.465 [2024-11-19 03:16:42.827919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.827946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.828033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.828060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.828174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.828203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.828331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.828371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.828494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.828523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.828614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.828640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.828786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.828813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.828902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.828929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.829046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.829074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.829166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.829194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 
00:35:32.465 [2024-11-19 03:16:42.829332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.829360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.829470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.829498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.829629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.829656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.829787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.829823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.829921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.829962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.830139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.830193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.830372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.830431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.830571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.830599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.830721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.830762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.830881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.830909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 
00:35:32.465 [2024-11-19 03:16:42.831071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.831125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.831206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.831232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.831405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.831461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.831543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.831569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.831647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.831672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.831766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.831792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.831913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.831951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.832043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.832069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.832180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.832208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 00:35:32.465 [2024-11-19 03:16:42.832330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.465 [2024-11-19 03:16:42.832356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.465 qpair failed and we were unable to recover it. 
00:35:32.465 [2024-11-19 03:16:42.832443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.832473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.832585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.832614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.832728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.832765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.832910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.832938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.833059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.833086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.833196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.833222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.833409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.833464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.833549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.833576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.833665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.833700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.833825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.833852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 
00:35:32.466 [2024-11-19 03:16:42.833997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.834059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.834193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.834247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.834359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.834387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.834478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.834506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.834602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.834630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.834775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.834803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.834932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.834973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.835171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.835228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.835369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.835443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.835558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.835586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 
00:35:32.466 [2024-11-19 03:16:42.835703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.835732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.835841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.835870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.836934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.836961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 
00:35:32.466 [2024-11-19 03:16:42.837103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.837131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.837245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.837273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.837378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.466 [2024-11-19 03:16:42.837419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.466 qpair failed and we were unable to recover it. 00:35:32.466 [2024-11-19 03:16:42.837567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.837596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.837715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.837743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.837825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.837851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.838010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.838070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.838265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.838305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.838401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.838430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.838545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.838572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 
00:35:32.467 [2024-11-19 03:16:42.838683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.838717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.838829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.838857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.838949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.838976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.839132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.839187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.839348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.839411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.839525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.839553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.839671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.839706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.839820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.839848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.839935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.839963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.840052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.840080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 
00:35:32.467 [2024-11-19 03:16:42.840221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.840249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.840360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.840386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.840527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.840553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.840669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.840705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.840796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.840822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.840946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.840974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.841108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.841174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.841255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.841281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.841403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.841429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.841550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.841577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 
00:35:32.467 [2024-11-19 03:16:42.841720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.841761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.841962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.467 [2024-11-19 03:16:42.842016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.467 qpair failed and we were unable to recover it. 00:35:32.467 [2024-11-19 03:16:42.842173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.842227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.842311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.842337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.842457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.842486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.842601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.842628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.842713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.842739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.842817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.842842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.842967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.843032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.843195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.843259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 
00:35:32.468 [2024-11-19 03:16:42.843406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.843489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.843650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.843679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.843810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.843839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.843954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.843982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.844129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.844155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.844314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.844371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.844487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.844517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.844609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.844636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.844742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.844768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.844882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.844908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 
00:35:32.468 [2024-11-19 03:16:42.845077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.845132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.845371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.845415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.845503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.845529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.845610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.845635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.845751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.845778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.845854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.845879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.846048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.846101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.846192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.846220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.846377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.846434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.846518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.846550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 
00:35:32.468 [2024-11-19 03:16:42.846668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.846705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.846797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.846824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.846937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.846964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.847048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.468 [2024-11-19 03:16:42.847073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.468 qpair failed and we were unable to recover it. 00:35:32.468 [2024-11-19 03:16:42.847173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.847201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.847317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.847344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.847427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.847453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.847569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.847608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.847704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.847732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.847815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.847839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 
00:35:32.469 [2024-11-19 03:16:42.847929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.847955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.848073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.848099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.848208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.848235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.848377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.848406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.848539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.848567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.848710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.848752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.848918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.848945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.849076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.849141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.849293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.849346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.849575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.849639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 
00:35:32.469 [2024-11-19 03:16:42.849793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.849819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.849939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.849966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.850084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.850111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.850237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.850278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.850402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.850466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.850578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.850606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.850723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.850751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.850842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.850873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.851017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.851044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.851160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.851187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 
00:35:32.469 [2024-11-19 03:16:42.851310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.851340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.851493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.851533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.469 [2024-11-19 03:16:42.851683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.469 qpair failed and we were unable to recover it. 00:35:32.469 [2024-11-19 03:16:42.851772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.851797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.851884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.851909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.852022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.852102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.852312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.852371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.852492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.852532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.852631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.852658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.852782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.852811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 
00:35:32.470 [2024-11-19 03:16:42.852922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.852984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.853105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.853247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.853351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.853480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.853621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.853735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.853886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.853996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.854022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.854104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.854129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 
00:35:32.470 [2024-11-19 03:16:42.854305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.854360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.854490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.854520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.854665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.854701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.854793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.854819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.854901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.854932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.855043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.855072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.855295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.855353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.855437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.855464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.855546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.855574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.855658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.855684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 
00:35:32.470 [2024-11-19 03:16:42.855782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.855810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.855965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.856007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.856135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.856164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.856336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.856389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.856478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.856506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.470 [2024-11-19 03:16:42.856622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.470 [2024-11-19 03:16:42.856649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.470 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.856770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.856800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.856944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.856971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.857097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.857124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.857205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.857231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 
00:35:32.471 [2024-11-19 03:16:42.857341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.857368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.857484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.857511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.857620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.857648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.857774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.857801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.857912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.857939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.858064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.858205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.858321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.858435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.858573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 
00:35:32.471 [2024-11-19 03:16:42.858684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.858826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.858937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.858963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.859069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.859183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.859299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.859419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.859548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.859709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.859822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 
00:35:32.471 [2024-11-19 03:16:42.859959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.859986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.860098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.860155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.860312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.860386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.860530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.860556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.860699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.860743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.860868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.860896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.860997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.861024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.471 qpair failed and we were unable to recover it. 00:35:32.471 [2024-11-19 03:16:42.861105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.471 [2024-11-19 03:16:42.861132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.861251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.861279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.861350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.861376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 
00:35:32.472 [2024-11-19 03:16:42.861489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.861515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.861624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.861650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.861778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.861805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.861885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.861911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.861997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.862165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.862230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.862394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.862462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.862643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.862670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.862786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.862815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.862925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.862957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 
00:35:32.472 [2024-11-19 03:16:42.863051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.863079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.863229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.863289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.863379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.863406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.863513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.863541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.863618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.863645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.863762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.863803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.863941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.863983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.864217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.864270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.864435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.864488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.864571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.864597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 
00:35:32.472 [2024-11-19 03:16:42.864684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.864718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.864825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.864857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.864974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.865002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.865084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.865110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.865191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.865217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.865329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.865356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.865439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.865465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.865553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.472 [2024-11-19 03:16:42.865581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.472 qpair failed and we were unable to recover it. 00:35:32.472 [2024-11-19 03:16:42.865677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.865718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.865848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.865877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 
00:35:32.473 [2024-11-19 03:16:42.865967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.865993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.866109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.866137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.866224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.866252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.866338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.866366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.866475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.866515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.866603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.866633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.866749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.866777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.866894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.866922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.867021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.867163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 
00:35:32.473 [2024-11-19 03:16:42.867305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.867429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.867553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.867663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.867789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.867934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.867969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.868125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.868153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.868271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.868301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.868394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.868422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.868519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.868548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 
00:35:32.473 [2024-11-19 03:16:42.868702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.868741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.868824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.868851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.868932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.868958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.869049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.869078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.869216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.869314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.869530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.869558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.869703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.869736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.869843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.869871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.870016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.870043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.473 [2024-11-19 03:16:42.870122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.870148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 
00:35:32.473 [2024-11-19 03:16:42.870262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.473 [2024-11-19 03:16:42.870289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.473 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.870400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.870433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.870558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.870587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.870703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.870750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.870852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.870892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.871014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.871079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.871307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.871374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.871684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.871718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.871815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.871843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.871934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.871969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 
00:35:32.474 [2024-11-19 03:16:42.872080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.872107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.872229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.872257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.872364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.872404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.872521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.872550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.872665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.872700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.872829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.872857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.872951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.872989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.873102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.873129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.873238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.873265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.873353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.873381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 
00:35:32.474 [2024-11-19 03:16:42.873467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.873494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.873580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.873608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.873746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.873787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.873913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.873943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.874063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.874092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.874260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.874314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.874534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.874594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.874706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.874738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.874830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.874859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.874969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.875010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 
00:35:32.474 [2024-11-19 03:16:42.875123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.875189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.875367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.875432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.875592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.875619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.875745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.474 [2024-11-19 03:16:42.875775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.474 qpair failed and we were unable to recover it. 00:35:32.474 [2024-11-19 03:16:42.875861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.875888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.876056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.876109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.876245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.876300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.876416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.876446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.876557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.876598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.876696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.876730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 
00:35:32.475 [2024-11-19 03:16:42.876845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.876894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.877190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.877217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.877408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.877472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.877608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.877644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.877775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.877800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.877919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.877948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.878091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.878154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.878322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.878380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.878552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.878605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.878697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.878735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 
00:35:32.475 [2024-11-19 03:16:42.878847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.878874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.878960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.878985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.879213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.879270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.879417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.879467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.879547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.879576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.879668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.879700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.879821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.879848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.879938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.879968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.880084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.880113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.880201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.880229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 
00:35:32.475 [2024-11-19 03:16:42.880409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.880464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.880578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.475 [2024-11-19 03:16:42.880605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.475 qpair failed and we were unable to recover it. 00:35:32.475 [2024-11-19 03:16:42.880724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.880750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.880843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.880871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.880961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.880994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.881114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.881142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.881251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.881278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.881393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.881421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.881578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.881623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.881726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.881754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 
00:35:32.476 [2024-11-19 03:16:42.881870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.881898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.882020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.882047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.882163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.882191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.882365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.882420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.882539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.882567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.882701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.882739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.882885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.882912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.882993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.883019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.883160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.883187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.883317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.883374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 
00:35:32.476 [2024-11-19 03:16:42.883496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.883523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.883618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.883645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.883782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.883811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.883928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.883966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.884082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.884109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.884226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.884254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.884369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.884398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.884502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.884542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.884665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.884702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.884828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.884856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 
00:35:32.476 [2024-11-19 03:16:42.884933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.884961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.885105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.885133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.476 [2024-11-19 03:16:42.885243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.476 [2024-11-19 03:16:42.885270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.476 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.885357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.885385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.885501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.885532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.885639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.885678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.885810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.885837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.885946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.885972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.886062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.886128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.886423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.886488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 
00:35:32.477 [2024-11-19 03:16:42.886675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.886708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.886798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.886825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.887014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.887069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.887203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.887248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.887334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.887361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.887447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.887475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.887607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.887647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.887750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.887779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.887898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.887931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.888090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.888144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 
00:35:32.477 [2024-11-19 03:16:42.888259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.888319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.888403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.888431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.888550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.888578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.888686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.888742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.888860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.888888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.889002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.889028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.889203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.889270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.889492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.889557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.889706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.889744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.889834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.889860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 
00:35:32.477 [2024-11-19 03:16:42.889940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.889969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.890171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.890235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.890433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.890499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.890669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.890704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.477 qpair failed and we were unable to recover it. 00:35:32.477 [2024-11-19 03:16:42.890808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.477 [2024-11-19 03:16:42.890834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.890945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.890972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.891054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.891080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.891278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.891305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.891483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.891564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.891774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.891815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 
00:35:32.478 [2024-11-19 03:16:42.891935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.891969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.892094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.892151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.892296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.892353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.892467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.892503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.892651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.892682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.892814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.892846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.892964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.892992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.893071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.893096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.893179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.893207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.893303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.893330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 
00:35:32.478 [2024-11-19 03:16:42.893441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.893468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.893583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.893609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.893738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.893764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.893904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.893931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.894047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.894073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.894315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.894380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.894662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.894753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.894871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.894897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.895037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.895063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.895184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.895251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 
00:35:32.478 [2024-11-19 03:16:42.895488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.895515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.895716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.895743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.895830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.895856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.895944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.895969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.896100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.896140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.896223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.896250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.896469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.896524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.896636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.896663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.896796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.896837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.896970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.897011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 
00:35:32.478 [2024-11-19 03:16:42.897126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.897154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.897342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.478 [2024-11-19 03:16:42.897398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.478 qpair failed and we were unable to recover it. 00:35:32.478 [2024-11-19 03:16:42.897486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.897519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.897639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.897666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.897819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.897849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.897966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.897993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.898083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.898110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.898183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.898209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.898300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.898330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.898431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.898471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 
00:35:32.479 [2024-11-19 03:16:42.898565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.898593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.898686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.898720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.898803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.898834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.899071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.899136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.899318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.899382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.899532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.899559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.899711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.899741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.899827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.899855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.899983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.900013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.900184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.900211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 
00:35:32.479 [2024-11-19 03:16:42.900329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.900385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.900500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.900528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.900628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.900655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.900782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.900809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.900920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.900947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.901053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.901079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.901204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.901230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.901325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.901354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.901431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.901459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.901552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.901585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 
00:35:32.479 [2024-11-19 03:16:42.901709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.901738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.901854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.901882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.901969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.902130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.902274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.902428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.902585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.902699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.902835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.902953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.902980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 
00:35:32.479 [2024-11-19 03:16:42.903066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.903091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.903279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.903307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.903387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.479 [2024-11-19 03:16:42.903418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.479 qpair failed and we were unable to recover it. 00:35:32.479 [2024-11-19 03:16:42.903550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.903591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.903750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.903780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.903895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.903923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.904120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.904181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.904319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.904368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.904486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.904518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.904634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.904662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 
00:35:32.480 [2024-11-19 03:16:42.904813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.904862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.904999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.905114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.905255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.905404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.905516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.905660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.905804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.905956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.905996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.906116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.906145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 
00:35:32.480 [2024-11-19 03:16:42.906234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.906262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.906378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.906405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.906498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.906526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.906626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.906666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.906795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.906823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.906945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.906975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.907097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.907125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.907272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.907300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.907420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.907447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.907562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.907594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 
00:35:32.480 [2024-11-19 03:16:42.907717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.907747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.907851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.907891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.907994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.908023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.908110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.908138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.908298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.908350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.908492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.908519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.908662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.908695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.908841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.908868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.908968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.909058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 00:35:32.480 [2024-11-19 03:16:42.909236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.909318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.480 qpair failed and we were unable to recover it. 
00:35:32.480 [2024-11-19 03:16:42.909480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.480 [2024-11-19 03:16:42.909509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.909630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.909657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.909783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.909811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.909934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.909962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.910058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.910086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.910179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.910207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.910321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.910349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.910547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.910574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.910658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.910685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.910807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.910835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 
00:35:32.481 [2024-11-19 03:16:42.910949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.910977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.911087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.911114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.911189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.911215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.911346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.911403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.911529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.911570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.911667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.911705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.911794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.911822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.911962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.911989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.912101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.912127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.912248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.912274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 
00:35:32.481 [2024-11-19 03:16:42.912415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.912442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.912555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.912582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.912664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.912703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.912815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.912842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.912923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.912957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.913095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.913123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.913233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.913259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.913338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.913363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.913566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.913632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.913823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.913852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 
00:35:32.481 [2024-11-19 03:16:42.914024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.914076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.914244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.914307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.914490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.914517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.914617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.914658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.914774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.914815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.914933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.914970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.915062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.915089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.915288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.915352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.915632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.481 [2024-11-19 03:16:42.915723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.481 qpair failed and we were unable to recover it. 00:35:32.481 [2024-11-19 03:16:42.915870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.915897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 
00:35:32.482 [2024-11-19 03:16:42.915989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.916013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.916179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.916243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.916420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.916447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.916640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.916686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.916812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.916842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.916971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.917000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.917117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.917153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.917261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.917302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.917429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.917501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.917598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.917626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 
00:35:32.482 [2024-11-19 03:16:42.917747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.917774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.917891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.917916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.918040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.918187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.918303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.918421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.918535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.918686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.918853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.918972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.919001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 
00:35:32.482 [2024-11-19 03:16:42.919167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.919195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.919305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.919332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.919460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.919487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.919655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.919685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.919822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.919850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.919968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.919996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.920081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.920108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.920203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.920230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.920368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.920396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.920486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.920513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 
00:35:32.482 [2024-11-19 03:16:42.920607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.920635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.920728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.920756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.482 qpair failed and we were unable to recover it. 00:35:32.482 [2024-11-19 03:16:42.920841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.482 [2024-11-19 03:16:42.920869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.920979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.921112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.921219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.921347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.921464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.921574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.921744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 
00:35:32.483 [2024-11-19 03:16:42.921852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.921880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.921990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.922092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.922210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.922390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.922559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.922675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.922832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.922952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.922985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.923101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.923129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 
00:35:32.483 [2024-11-19 03:16:42.923219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.923246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.923362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.923389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.923521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.923562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.923700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.923748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.923901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.923929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.924084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.924136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.924281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.924333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.924436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.924463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.924570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.924597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.924712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.924748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 
00:35:32.483 [2024-11-19 03:16:42.924868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.924897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.925072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.925137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.925312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.925394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.925575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.925602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.925720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.925758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.925868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.925895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.926010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.926035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.926215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.926280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.926497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.926561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.926759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.926787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 
00:35:32.483 [2024-11-19 03:16:42.926910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.926939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.927058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.927086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.483 qpair failed and we were unable to recover it. 00:35:32.483 [2024-11-19 03:16:42.927248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.483 [2024-11-19 03:16:42.927307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.927415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.927442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.927554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.927581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.927695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.927734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.927828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.927854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.927936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.927971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.928082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.928109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.928220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.928248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 
00:35:32.484 [2024-11-19 03:16:42.928361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.928388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.928529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.928556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.928644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.928672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.928805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.928838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.928958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.928985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.929132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.929159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.929238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.929264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.929351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.929380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.929506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.929547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.929644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.929673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 
00:35:32.484 [2024-11-19 03:16:42.929799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.929827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.929914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.929940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.930924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.930961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 
00:35:32.484 [2024-11-19 03:16:42.931049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.931118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.931360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.931425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.484 qpair failed and we were unable to recover it. 00:35:32.484 [2024-11-19 03:16:42.931568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.484 [2024-11-19 03:16:42.931597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.931752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.931779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.931866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.931893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.932087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.932139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.932293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.932348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.932465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.932497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.932595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.932622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.932748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.932774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 
00:35:32.485 [2024-11-19 03:16:42.932896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.932942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.933191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.933243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.933477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.933530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.933671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.933713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.933848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.933875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.934110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.934164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.934386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.934438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.934547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.934574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.934695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.934725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.934893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.934920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 
00:35:32.485 [2024-11-19 03:16:42.935030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.935093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.935307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.935357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.935471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.935498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.935630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.935671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.935840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.935870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.935973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.936000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.485 [2024-11-19 03:16:42.936075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.485 [2024-11-19 03:16:42.936100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.485 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.936192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.936219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.936340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.936366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.936455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.936484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 
00:35:32.486 [2024-11-19 03:16:42.936606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.936633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.936746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.936786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.936879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.936906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.937026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.937053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.937176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.937202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.937344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.937371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.937551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.937577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.937672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.937720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.937828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.937858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.937952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.937983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 
00:35:32.486 [2024-11-19 03:16:42.938065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.938093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.938312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.938340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.938426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.938454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.938538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.938565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.938675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.938708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.938831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.938858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.938949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.938982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.939125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.939151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.939265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.939291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.939370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.939400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 
00:35:32.486 [2024-11-19 03:16:42.939546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.939575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.939664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.939697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.939819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.939846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.939974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.940001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.940087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.940114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.940204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.940232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.940353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.940380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.940497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.940524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.940641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.940668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.940801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.940829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 
00:35:32.486 [2024-11-19 03:16:42.941039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.941104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.941316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.941371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.941488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.941516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.941639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.941666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.486 qpair failed and we were unable to recover it. 00:35:32.486 [2024-11-19 03:16:42.941809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.486 [2024-11-19 03:16:42.941837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.941950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.941985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.942108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.942135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.942249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.942276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.942394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.942423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.942544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.942571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 
00:35:32.487 [2024-11-19 03:16:42.942716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.942754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.942895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.942922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.943015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.943042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.943154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.943182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.943298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.943326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.943437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.943465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.943546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.943573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.943699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.943749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.943874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.943914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.944002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.944031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 
00:35:32.487 [2024-11-19 03:16:42.944153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.944208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.944292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.944318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.944434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.944462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.944554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.944582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.944668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.944702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.944840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.944880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.944986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.945013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.945109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.945138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.945252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.945280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.945420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.945447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 
00:35:32.487 [2024-11-19 03:16:42.945563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.945590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.945669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.945702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.945836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.945866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.946083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.946150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.946427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.946481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.946559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.946585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.946728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.946759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.946876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.946902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.947133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.947191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.947414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.947465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 
00:35:32.487 [2024-11-19 03:16:42.947582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.947609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.947728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.487 [2024-11-19 03:16:42.947766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.487 qpair failed and we were unable to recover it. 00:35:32.487 [2024-11-19 03:16:42.947922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.947970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.948090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.948120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.948229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.948298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.948538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.948591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.948678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.948714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.948985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.949013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.949157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.949184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.949275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.949303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 
00:35:32.488 [2024-11-19 03:16:42.949383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.949411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.949508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.949549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.949676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.949717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.949869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.949897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.950106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.950170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.950288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.950316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.950425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.950452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.950561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.950594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.950748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.950775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.950857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.950887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 
00:35:32.488 [2024-11-19 03:16:42.951017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.951045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.951189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.951216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.951335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.951363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.951480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.951509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.951592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.951620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.951770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.951799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.951939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.952000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.952176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.952226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.952378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.952438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.952545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.952585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 
00:35:32.488 [2024-11-19 03:16:42.952717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.952753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.952872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.952899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.953025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.953053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.953165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.953192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.953339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.953367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.953486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.953520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.953671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.953705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.953822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.953849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.954001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.954028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.488 qpair failed and we were unable to recover it. 00:35:32.488 [2024-11-19 03:16:42.954170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.488 [2024-11-19 03:16:42.954197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 
00:35:32.489 [2024-11-19 03:16:42.954337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.954365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.954481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.954510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.954627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.954654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.954787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.954816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.954908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.954935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.955107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.955157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.955300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.955358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.955501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.955530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.955646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.955674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.955782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.955810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 
00:35:32.489 [2024-11-19 03:16:42.955972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.956027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.956107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.956135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.956306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.956358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.956473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.956513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.956634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.956661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.956793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.956821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.956943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.956976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.957121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.957153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.957275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.957327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.957442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.957470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 
00:35:32.489 [2024-11-19 03:16:42.957597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.957624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.957749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.957777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.957892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.957918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.958032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.958059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.958155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.958182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.958338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.958367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.958481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.958509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.958621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.958648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.958744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.958770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.958847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.958874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 
00:35:32.489 [2024-11-19 03:16:42.958992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.489 [2024-11-19 03:16:42.959019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.489 qpair failed and we were unable to recover it. 00:35:32.489 [2024-11-19 03:16:42.959166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.959194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.959313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.959341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.959426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.959454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.959598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.959627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.959714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.959750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.959861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.959888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.960018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.960045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.960189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.960250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.960326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.960352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 
00:35:32.490 [2024-11-19 03:16:42.960460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.960486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.960618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.960658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.960779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.960820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.961021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.961069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.961292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.961341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.961471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.961523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.961616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.961643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.961767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.961794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.961885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.961912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.962029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.962058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 
00:35:32.490 [2024-11-19 03:16:42.962229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.962283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.962367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.962393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.962510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.962542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.962650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.962681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.962817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.962845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.963005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.963060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.963240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.963267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.963491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.963549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.963666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.963699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.963817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.963845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 
00:35:32.490 [2024-11-19 03:16:42.963944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.963986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.964076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.964103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.964213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.964241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.964374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.964400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.964654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.964681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.964801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.964827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.964936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.964970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.965134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.965196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.490 [2024-11-19 03:16:42.965445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.490 [2024-11-19 03:16:42.965510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.490 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.965698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.965739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 
00:35:32.491 [2024-11-19 03:16:42.965884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.965912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.966031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.966058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.966226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.966254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.966435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.966489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.966632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.966663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.966821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.966849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.967002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.967067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.967314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.967379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.967578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.967606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.967702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.967736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 
00:35:32.491 [2024-11-19 03:16:42.967877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.967903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.967988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.968012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.968086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.968135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.968403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.968429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.968675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.968712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.968807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.968832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.968914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.968940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.969028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.969067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.969247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.969304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.969502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.969557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 
00:35:32.491 [2024-11-19 03:16:42.969675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.969709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.969826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.969853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.969963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.970002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.970243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.970310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.970547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.970597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.970686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.970722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.970803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.970828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.970943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.971022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.971189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.971218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.971369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.971397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 
00:35:32.491 [2024-11-19 03:16:42.971526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.971567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.971658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.971698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.971833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.971861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.972001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.972029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.972115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.972143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.972263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.972295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.491 [2024-11-19 03:16:42.972445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.491 [2024-11-19 03:16:42.972474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.491 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.972603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.972642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.972752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.972781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.972924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.972957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 
00:35:32.492 [2024-11-19 03:16:42.973201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.973246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.973385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.973457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.973666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.973701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.973832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.973859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.973974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.974001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.974167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.974237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.974412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.974439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.974579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.974606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.974709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.974738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.974823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.974849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 
00:35:32.492 [2024-11-19 03:16:42.974988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.975015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.975133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.975159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.975240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.975265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.975428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.975469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.975567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.975595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.975699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.975727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.975842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.975869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.975982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.976009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.976115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.976142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.976258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.976284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 
00:35:32.492 [2024-11-19 03:16:42.976426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.976453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.976578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.976620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.976709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.976736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.976851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.976878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.976993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.977020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.977173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.977239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.977526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.977593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.977778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.977805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.977905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.977946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.978135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.978165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 
00:35:32.492 [2024-11-19 03:16:42.978342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.978388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.978533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.978564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.978670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.978720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.978878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.492 [2024-11-19 03:16:42.978908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.492 qpair failed and we were unable to recover it. 00:35:32.492 [2024-11-19 03:16:42.979022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.979049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.979244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.979317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.979475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.979509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.979621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.979648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.979760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.979789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.979914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.979948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 
00:35:32.493 [2024-11-19 03:16:42.980139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.980206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.980294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.980326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.980411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.980439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.980552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.980591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.980671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.980702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.980853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.980894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.981042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.981071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.981213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.981239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.981324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.981348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.981432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.981458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 
00:35:32.493 [2024-11-19 03:16:42.981574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.981600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.981681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.981717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.981859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.981886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.982002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.982029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.982112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.982138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.982320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.982372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.982452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.982477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.982558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.982587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.982679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.982722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.982845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.982874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 
00:35:32.493 [2024-11-19 03:16:42.982987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.983015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.983177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.983230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.983412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.983470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.983589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.983626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.983720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.983747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.983876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.983924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.984054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.984083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.984171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.984200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.984385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.984441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.984559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.984598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 
00:35:32.493 [2024-11-19 03:16:42.984709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.984735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.493 [2024-11-19 03:16:42.984882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.493 [2024-11-19 03:16:42.984910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.493 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.985070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.985213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.985391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.985518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.985667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.985781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.985882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.985973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.986001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 
00:35:32.494 [2024-11-19 03:16:42.986142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.986169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.986310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.986338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.986421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.986447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.986567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.986597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.986743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.986773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.986863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.986889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.987109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.987173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.987325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.987371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.987510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.987538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.987626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.987654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 
00:35:32.494 [2024-11-19 03:16:42.987748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.987774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.987894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.987921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.988074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.988127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.988222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.988249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.988393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.988446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.988569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.988598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.988714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.988743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.988862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.988887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.989001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.989029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.989151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.989178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 
00:35:32.494 [2024-11-19 03:16:42.989309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.989349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.989473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.989501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.989643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.989670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.494 [2024-11-19 03:16:42.989794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.494 [2024-11-19 03:16:42.989822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.494 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.989903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.989929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.990155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.990215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.990434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.990492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.990606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.990635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.990771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.990817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.990944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.990974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 
00:35:32.495 [2024-11-19 03:16:42.991207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.991263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.991400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.991460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.991575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.991604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.991718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.991746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.991873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.991900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.992050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.992078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.992194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.992222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.992367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.992396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.992486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.992514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.992652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.992700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 
00:35:32.495 [2024-11-19 03:16:42.992822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.992862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.993027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.993091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.993260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.993318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.993467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.993504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.993650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.993677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.993824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.993851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.993958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.994007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.994254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.994322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.994505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.994569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.994729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.994756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 
00:35:32.495 [2024-11-19 03:16:42.994872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.994897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.994992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.995018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.995134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.995214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.995439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.995503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.995674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.995708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.995802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.995832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.995920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.995946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.996060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.996086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.996250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.996322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.996583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.996610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 
00:35:32.495 [2024-11-19 03:16:42.996743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.996785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.495 [2024-11-19 03:16:42.996896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.495 [2024-11-19 03:16:42.996937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.495 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.997086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.997115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.997256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.997284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.997423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.997450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.997582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.997623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.997767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.997795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.997913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.997940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.998022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.998046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.998187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.998253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 
00:35:32.496 [2024-11-19 03:16:42.998544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.998571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.998718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.998747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.998888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.998915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.999008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.999033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.999116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.999142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.999298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.999349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.999436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.999462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.999585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.999613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.999735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.999761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:42.999889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:42.999930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 
00:35:32.496 [2024-11-19 03:16:43.000061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.000116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.000236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.000264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.000450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.000524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.000730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.000757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.000871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.000896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.000979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.001004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.001111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.001136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.001249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.001331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.001445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.001474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.001587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.001615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 
00:35:32.496 [2024-11-19 03:16:43.001759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.001787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.001899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.001926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.496 qpair failed and we were unable to recover it. 00:35:32.496 [2024-11-19 03:16:43.002963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-11-19 03:16:43.002991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 
00:35:32.497 [2024-11-19 03:16:43.003144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.003213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.003523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.003550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.003656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.003682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.003773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.003798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.003912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.003939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.004050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.004106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.004337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.004391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.004478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.004509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.004600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.004629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.004750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.004779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 
00:35:32.497 [2024-11-19 03:16:43.004895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.004923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.005041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.005156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.005290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.005435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.005578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.005735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.005880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.005990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.006017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.006130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.006156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 
00:35:32.497 [2024-11-19 03:16:43.006263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.006289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.006401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.006427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.006550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.006585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.006703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.006733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.006877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.006905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.007078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.007133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.007220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.007247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.007328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.007357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.007479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.007506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.007607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.007648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 
00:35:32.497 [2024-11-19 03:16:43.007772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.007802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.007920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.007948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.008129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.008156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.008327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.008377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.008516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.008544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.008633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.008659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.497 [2024-11-19 03:16:43.008782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-11-19 03:16:43.008823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.497 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.008951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.008981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.009153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.009212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.009321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.009395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 
00:35:32.498 [2024-11-19 03:16:43.009485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.009511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.009630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.009658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.009761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.009788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.009870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.009898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.010042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.010070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.010153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.010179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.010295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.010323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.010413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.010439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.010565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.010605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.010709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.010745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 
00:35:32.498 [2024-11-19 03:16:43.010864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.010891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.010999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.011147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.011318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.011465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.011582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.011729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.011848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.011951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.011977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.012119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.012146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 
00:35:32.498 [2024-11-19 03:16:43.012235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.012260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.012344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.012372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.012467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.012495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.012614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.012643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.012753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.012793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.012888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.012916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.013010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.013082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.013259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.013340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.013535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.013562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.013710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.013739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 
00:35:32.498 [2024-11-19 03:16:43.013831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.013858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.013972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.014000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.014162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.014222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.498 [2024-11-19 03:16:43.014310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-11-19 03:16:43.014337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.498 qpair failed and we were unable to recover it. 00:35:32.499 [2024-11-19 03:16:43.014416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.499 [2024-11-19 03:16:43.014443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.499 qpair failed and we were unable to recover it. 00:35:32.499 [2024-11-19 03:16:43.014533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.499 [2024-11-19 03:16:43.014561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.499 qpair failed and we were unable to recover it. 00:35:32.499 [2024-11-19 03:16:43.014680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.499 [2024-11-19 03:16:43.014712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.499 qpair failed and we were unable to recover it. 00:35:32.499 [2024-11-19 03:16:43.014835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.499 [2024-11-19 03:16:43.014864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.499 qpair failed and we were unable to recover it. 00:35:32.499 [2024-11-19 03:16:43.015012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.499 [2024-11-19 03:16:43.015039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.499 qpair failed and we were unable to recover it. 00:35:32.499 [2024-11-19 03:16:43.015148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.499 [2024-11-19 03:16:43.015211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.499 qpair failed and we were unable to recover it. 
00:35:32.794 [2024-11-19 03:16:43.015437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.015490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.015582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.015608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.015714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.015742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.015869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.015897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.016006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.016033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.016144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.016172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.016287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.016315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.016397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.016423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.016537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.016564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.016681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.016722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 
00:35:32.794 [2024-11-19 03:16:43.016835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.016863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.016979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.017006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.017088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.017117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.017231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.017258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.017349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.017375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.017478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.017519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.017676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.017720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.017843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.017872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.017956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.018011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.018290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.018356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 
00:35:32.794 [2024-11-19 03:16:43.018684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.018763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.018884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.018911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.019020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.019046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.794 [2024-11-19 03:16:43.019186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.794 [2024-11-19 03:16:43.019238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.794 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.019461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.019516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.019631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.019659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.019818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.019846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.019988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.020015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.020103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.020130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.020308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.020352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 
00:35:32.795 [2024-11-19 03:16:43.020470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.020498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.020603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.020644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.020782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.020812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.020903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.020931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.021017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.021045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.021171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.021199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.021370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.021432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.021524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.021552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.021669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.021701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.021816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.021843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 
00:35:32.795 [2024-11-19 03:16:43.021938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.021965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.022057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.022095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.022217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.022244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.022331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.022356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.022471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.022531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.022724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.022751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.022870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.022897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.023005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.023068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.023247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.023318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.023621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.023649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 
00:35:32.795 [2024-11-19 03:16:43.023760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.023789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.023897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.023924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.795 [2024-11-19 03:16:43.024851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.024876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 
00:35:32.795 [2024-11-19 03:16:43.025016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.795 [2024-11-19 03:16:43.025043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.795 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.025128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.025153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.025325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.025351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.025445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.025472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.025551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.025578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.025680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.025727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.025842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.025882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.026003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.026033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.026148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.026176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.026294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.026351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 
00:35:32.796 [2024-11-19 03:16:43.026510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.026538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.026660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.026696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.026802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.026832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.026919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.026946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.027025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.027171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.027286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.027414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.027560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.027672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 
00:35:32.796 [2024-11-19 03:16:43.027826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.027945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.027973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.028085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.028113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.028198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.028224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.028314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.028344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.028461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.028490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.028589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.028629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.028760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.028789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.028903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.028930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.029017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.029095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 
00:35:32.796 [2024-11-19 03:16:43.029400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.029456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.029583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.029611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.029772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.029813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.029931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.029960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.030061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.030088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.030174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.030201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.030313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.030341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.030458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.030484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.796 qpair failed and we were unable to recover it. 00:35:32.796 [2024-11-19 03:16:43.030601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.796 [2024-11-19 03:16:43.030630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.030714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.030740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 
00:35:32.797 [2024-11-19 03:16:43.030824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.030849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.030932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.030960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.031956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.031982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 
00:35:32.797 [2024-11-19 03:16:43.032078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.032104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.032221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.032249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.032359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.032385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.032467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.032493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.032613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.032644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.032775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.032803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.032895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.032922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.033073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.033100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.033267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.033317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.033432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.033461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 
00:35:32.797 [2024-11-19 03:16:43.033578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.033604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.033705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.033746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.033873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.033902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.034020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.034049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.034232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.034288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.034466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.034493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.034607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.034634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.034774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.034801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.034921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.034949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.035042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.035070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 
00:35:32.797 [2024-11-19 03:16:43.035164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.035193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.035302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.035329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.035503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.035553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.035708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.035736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.035841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.035882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.035978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.797 [2024-11-19 03:16:43.036006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.797 qpair failed and we were unable to recover it. 00:35:32.797 [2024-11-19 03:16:43.036129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.036186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.036407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.036461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.036573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.036601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.036715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.036743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 
00:35:32.798 [2024-11-19 03:16:43.036888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.036917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.037033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.037170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.037314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.037464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.037604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.037745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.037911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.037999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.038028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.038149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.038177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 
00:35:32.798 [2024-11-19 03:16:43.038318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.038345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.038458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.038486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.038561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.038587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.038729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.038767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.038892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.038933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.039030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.039160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.039284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.039423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.039566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 
00:35:32.798 [2024-11-19 03:16:43.039672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.039801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.039939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.039967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.040066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.040095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.040177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.040203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.040344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.040371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.040457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.040484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.040597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.040625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.040710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.040737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.798 qpair failed and we were unable to recover it. 00:35:32.798 [2024-11-19 03:16:43.040849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.798 [2024-11-19 03:16:43.040877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 
00:35:32.799 [2024-11-19 03:16:43.040995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.041024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.041166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.041192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.041312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.041338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.041452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.041480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.041570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.041595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.041709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.041738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.041857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.041884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.042014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.042055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.042156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.042185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.042300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.042327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 
00:35:32.799 [2024-11-19 03:16:43.042411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.042437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.042582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.042609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.042729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.042757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.042908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.042940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.043121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.043149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.043238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.043265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.043403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.043432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.043517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.043543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.043640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.043681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.043850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.043878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 
00:35:32.799 [2024-11-19 03:16:43.043977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.044119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.044296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.044409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.044525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.044666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.044792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.044940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.044968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.045056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.045083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.045164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.045192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 
00:35:32.799 [2024-11-19 03:16:43.045339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.045368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.045490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.045532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.045642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.045670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.045763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.045789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.799 [2024-11-19 03:16:43.045937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.799 [2024-11-19 03:16:43.045964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.799 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.046093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.046120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.046213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.046240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.046362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.046390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.046519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.046560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.046649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.046679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 
00:35:32.800 [2024-11-19 03:16:43.046774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.046801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.046886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.046913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.047811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.047838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 
00:35:32.800 [2024-11-19 03:16:43.048004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.048056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.048280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.048333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.048528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.048581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.048727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.048756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.048891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.048920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.049044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.049104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.049247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.049296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.049435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.049463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.049577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.049602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.049702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.049742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 
00:35:32.800 [2024-11-19 03:16:43.049890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.049917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.050028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.050055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.050148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.050175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.050297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.050326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.050444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.050472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.050591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.050618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.050745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.050774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.050900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.050929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.051017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.051047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.051224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.051277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 
00:35:32.800 [2024-11-19 03:16:43.051460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.051518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.800 [2024-11-19 03:16:43.051658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.800 [2024-11-19 03:16:43.051686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.800 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.051812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.051839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.051925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.051953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.052034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.052062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.052177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.052204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.052323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.052351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.052462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.052489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.052628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.052656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.052751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.052780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 
00:35:32.801 [2024-11-19 03:16:43.052903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.052935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.053946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.053972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.054084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.054113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 
00:35:32.801 [2024-11-19 03:16:43.054231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.054258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.054371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.054400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.054518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.054545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.054659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.054686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.054841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.054869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.055035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.055064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.055222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.055281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.055502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.055529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.055645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.055671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.055801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.055829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 
00:35:32.801 [2024-11-19 03:16:43.055917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.055945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.056106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.056159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.056331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.056389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.056505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.056533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.056626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.056654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.056752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.056778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.056860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.056887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.057005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.057033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.057125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.057154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.801 qpair failed and we were unable to recover it. 00:35:32.801 [2024-11-19 03:16:43.057243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.801 [2024-11-19 03:16:43.057273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 
00:35:32.802 [2024-11-19 03:16:43.057426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.057465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.057603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.057643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.057744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.057773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.057853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.057880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.057956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.057983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.058071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.058099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.058214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.058242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.058363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.058393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.058516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.058544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.058652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.058680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 
00:35:32.802 [2024-11-19 03:16:43.058782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.058815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.058899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.058928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.059070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.059098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.059213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.059241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.059391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.059420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.059565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.059594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.059684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.059719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.059830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.059857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.059970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.059995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.060183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.060245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 
00:35:32.802 [2024-11-19 03:16:43.060482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.060544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.060662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.060705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.060804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.060832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.060945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.060973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.061178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.061245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.061434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.061492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.061588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.061614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.061730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.061769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.061868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.061892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.062063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.062128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 
00:35:32.802 [2024-11-19 03:16:43.062311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.062387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.062576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.062603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.062727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.062755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.062884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.062926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.063128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.063192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.063306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.063367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.063539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.802 [2024-11-19 03:16:43.063593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.802 qpair failed and we were unable to recover it. 00:35:32.802 [2024-11-19 03:16:43.063684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.063720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.063838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.063876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.063991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 
00:35:32.803 [2024-11-19 03:16:43.064132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.064243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.064400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.064553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.064663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.064793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.064931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.064958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.065042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.065068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.065162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.065191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.065276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.065304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 
00:35:32.803 [2024-11-19 03:16:43.065419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.065447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.065593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.065620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.065742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.065783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.065889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.065918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.066038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.066178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.066292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.066439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.066580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.066731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 
00:35:32.803 [2024-11-19 03:16:43.066850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.066962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.066990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.067106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.067167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.067395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.067447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.067526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.067552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.067667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.067702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.067796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.067823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.067908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.067933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.068078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.068105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 00:35:32.803 [2024-11-19 03:16:43.068191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.803 [2024-11-19 03:16:43.068219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.803 qpair failed and we were unable to recover it. 
00:35:32.803 [2024-11-19 03:16:43.068295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.068321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.068398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.068426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.068517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.068544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.068684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.068720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.068829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.068856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.068977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.069088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.069245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.069381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.069527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 
00:35:32.804 [2024-11-19 03:16:43.069660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.069772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.069879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.069906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.070125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.070189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.070468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.070532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.070750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.070779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.070897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.070926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.071018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.071046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.071165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.071192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.071303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.071331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 
00:35:32.804 [2024-11-19 03:16:43.071411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.071437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.071523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.071552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.071665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.071696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.071840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.071867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.071991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.072017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.072103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.072129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.072247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.072273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.072493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.072550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.072632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.072659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.072740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.072766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 
00:35:32.804 [2024-11-19 03:16:43.072872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.072899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.073003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.073043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.073211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.073276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.073386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.073454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.073635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.073667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.073771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.073797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.073910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.073936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.804 [2024-11-19 03:16:43.074028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.804 [2024-11-19 03:16:43.074054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.804 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.074309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.074335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.074574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.074638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 
00:35:32.805 [2024-11-19 03:16:43.074824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.074865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.074963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.074991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.075114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.075145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.075304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.075356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.075516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.075572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.075646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.075671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.075830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.075869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.076008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.076035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.076147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.076173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.076272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.076301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 
00:35:32.805 [2024-11-19 03:16:43.076409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.076450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.076563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.076603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.076740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.076769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.076886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.076915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.077000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.077144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.077260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.077411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.077538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.077645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 
00:35:32.805 [2024-11-19 03:16:43.077776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.077925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.077957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.078077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.078104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.078332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.078400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.078550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.078579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.078670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.078706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.078851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.078879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.079005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.079032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.079119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.079146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.079264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.079292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 
00:35:32.805 [2024-11-19 03:16:43.079402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.079430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.079523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.079549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.079700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.079728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.079823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.079850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.079978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.080042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.805 qpair failed and we were unable to recover it. 00:35:32.805 [2024-11-19 03:16:43.080208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.805 [2024-11-19 03:16:43.080272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.080596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.080629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.080734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.080762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.080846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.080873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.081059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.081126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 
00:35:32.806 [2024-11-19 03:16:43.081355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.081407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.081488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.081514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.081662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.081697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.081784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.081812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.081966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.081994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.082143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.082204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.082377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.082445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.082624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.082653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.082782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.082823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.082945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.082972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 
00:35:32.806 [2024-11-19 03:16:43.083117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.083144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.083284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.083339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.083435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.083463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.083586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.083629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.083751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.083778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.083895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.083922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.084071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.084098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.084188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.084216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.084309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.084337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.084456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.084483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 
00:35:32.806 [2024-11-19 03:16:43.084597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.084625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.084748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.084786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.084933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.084962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.085075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.085103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.085184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.085210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.085322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.085349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.085472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.085513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.085634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.085662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.085760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.085789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.085905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.085933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 
00:35:32.806 [2024-11-19 03:16:43.086026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.086054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.086140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.086167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.086292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.806 [2024-11-19 03:16:43.086359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.806 qpair failed and we were unable to recover it. 00:35:32.806 [2024-11-19 03:16:43.086477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.086505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.086634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.086675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.086834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.086862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.086958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.086986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.087135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.087192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.087422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.087475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.087593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.087625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 
00:35:32.807 [2024-11-19 03:16:43.087746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.087772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.087891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.087922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.088929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.088957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 
00:35:32.807 [2024-11-19 03:16:43.089067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.089093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.089176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.089203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.089292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.089332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.089459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.089488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.089605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.089634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.089753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.089780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.089871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.089897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.090041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.090096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.090174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.090199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.090303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.090330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 
00:35:32.807 [2024-11-19 03:16:43.090482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.090511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.090629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.090657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.090760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.090787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.090899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.090926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.091081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.091140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.091416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.091480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.091665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.091705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.091849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.091877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.091996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.092024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.807 [2024-11-19 03:16:43.092173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.092237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 
00:35:32.807 [2024-11-19 03:16:43.092324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.807 [2024-11-19 03:16:43.092351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.807 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.092441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.092472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.092593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.092621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.092711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.092739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.092858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.092886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.093023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.093063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.093163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.093191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.093303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.093330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.093469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.093496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.093589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.093630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 
00:35:32.808 [2024-11-19 03:16:43.093783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.093813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.093909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.093938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.094063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.094090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.094180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.094208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.094287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.094314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.094408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.094436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.094556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.094587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.094706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.094735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.094885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.094918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.095006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 
00:35:32.808 [2024-11-19 03:16:43.095148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.095293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.095467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.095597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.095714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.095825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.095945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.095974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.096120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.096173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.096347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.096404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.096485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.096511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 
00:35:32.808 [2024-11-19 03:16:43.096640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.096681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.096811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.096852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.096969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.097040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.097234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.808 [2024-11-19 03:16:43.097300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.808 qpair failed and we were unable to recover it. 00:35:32.808 [2024-11-19 03:16:43.097572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.097617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.097715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.097740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.097836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.097864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.097954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.097982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.098070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.098099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.098214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.098285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 
00:35:32.809 [2024-11-19 03:16:43.098470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.098497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.098591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.098632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.098787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.098817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.098906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.098934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.099125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.099184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.099364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.099427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.099555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.099584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.099731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.099759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.099858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.099887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.100007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.100035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 
00:35:32.809 [2024-11-19 03:16:43.100116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.100144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.100316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.100368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.100483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.100510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.100622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.100649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.100786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.100816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.100921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.100948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.101089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.101137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.101224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.101253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.101342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.101369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 00:35:32.809 [2024-11-19 03:16:43.101464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.809 [2024-11-19 03:16:43.101491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.809 qpair failed and we were unable to recover it. 
00:35:32.809 [2024-11-19 03:16:43.101598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.809 [2024-11-19 03:16:43.101627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420
00:35:32.809 qpair failed and we were unable to recover it.
00:35:32.815 [2024-11-19 03:16:43.101713 - 03:16:43.134030] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same connect() failure (errno = 111) repeats for tqpair=0x7f1b70000b90, 0x7f1b74000b90, 0x7f1b7c000b90, and 0x1942b40, all with addr=10.0.0.2, port=4420; each qpair failed and we were unable to recover it.
00:35:32.815 [2024-11-19 03:16:43.134151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.134177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.134296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.134327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.134427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.134467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.134589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.134624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.134748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.134774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.134861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.134887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.134971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.134998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.135093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.135121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.135215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.135244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.135362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.135392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 
00:35:32.815 [2024-11-19 03:16:43.135518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.135546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.135665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.135704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.135801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.135829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.135946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.135973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.136053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.136081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.136176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.136203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.136324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.136353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.136442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.136469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.136587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.136614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.136706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.136733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 
00:35:32.815 [2024-11-19 03:16:43.136875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.136902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.137029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.137056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.137175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.137203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.137350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.815 [2024-11-19 03:16:43.137378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.815 qpair failed and we were unable to recover it. 00:35:32.815 [2024-11-19 03:16:43.137495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.137523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.137643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.137670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.137819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.137859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.137957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.138039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.138208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.138278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.138602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.138666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 
00:35:32.816 [2024-11-19 03:16:43.138819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.138853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.138943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.138997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.139153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.139203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.139362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.139422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.139630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.139657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.139754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.139781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.139873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.139900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.140053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.140117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.140375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.140439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.140636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.140675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 
00:35:32.816 [2024-11-19 03:16:43.140814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.140853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.140977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.141006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.141121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.141147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.141374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.141429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.141522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.141549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.141659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.141687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.141809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.141840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.141922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.141950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.142174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.142230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.142470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.142522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 
00:35:32.816 [2024-11-19 03:16:43.142637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.142664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.142818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.142858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.142950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.143013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.143260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.143325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.143568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.143632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.143835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.143865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.143978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.144005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.144103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.144130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.144322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.144381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.144522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.144550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 
00:35:32.816 [2024-11-19 03:16:43.144667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.144699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.144791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.816 [2024-11-19 03:16:43.144818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.816 qpair failed and we were unable to recover it. 00:35:32.816 [2024-11-19 03:16:43.144911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.144937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.145014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.145040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.145155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.145181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.145294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.145321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.145442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.145468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.145583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.145609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.145772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.145813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.145945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.145984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 
00:35:32.817 [2024-11-19 03:16:43.146119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.146165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.146281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.146309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.146425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.146452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.146570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.146597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.146715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.146743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.146841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.146872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.147045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.147101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.147285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.147338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.147422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.147449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.147565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.147594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 
00:35:32.817 [2024-11-19 03:16:43.147671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.147705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.147822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.147849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.147965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.147993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.148132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.148161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.148410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.148470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.148613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.148640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.148719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.148746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.148864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.148892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.149031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.149099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.149312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.149365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 
00:35:32.817 [2024-11-19 03:16:43.149442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.149469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.149590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.149619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.149711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.149741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.149856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.149884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.817 [2024-11-19 03:16:43.150062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.817 [2024-11-19 03:16:43.150125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.817 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.150343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.150394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.150515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.150542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.150623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.150655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.150787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.150815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.150924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.150952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 
00:35:32.818 [2024-11-19 03:16:43.151096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.151123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.151333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.151395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.151474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.151501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.151609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.151636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.151731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.151758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.151849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.151876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.151969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.151997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.152110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.152137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.152256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.152284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.152369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.152398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 
00:35:32.818 [2024-11-19 03:16:43.152541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.152567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.152728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.152768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.152861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.152888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.153030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.153095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.153340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.153405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.153613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.153640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.153758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.153785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.153891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.153917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.154003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.154028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.154210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.154280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 
00:35:32.818 [2024-11-19 03:16:43.154431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.154488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.154598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.154625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.154712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.154741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.154832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.154857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.154992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.155044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.155262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.155315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.155400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.155428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.155518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.155545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.155678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.155741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 00:35:32.818 [2024-11-19 03:16:43.155844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.818 [2024-11-19 03:16:43.155872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.818 qpair failed and we were unable to recover it. 
00:35:32.818 [2024-11-19 03:16:43.155984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.818 [2024-11-19 03:16:43.156012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:32.818 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats for timestamps 03:16:43.156278 through 03:16:43.190658, cycling over tqpair handles 0x1942b40, 0x7f1b70000b90, 0x7f1b74000b90 and 0x7f1b7c000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:35:32.824 [2024-11-19 03:16:43.190761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.824 [2024-11-19 03:16:43.190790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:32.824 qpair failed and we were unable to recover it.
00:35:32.824 [2024-11-19 03:16:43.190902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.190931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.191019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.191046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.191262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.191324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.191502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.191565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.191685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.191717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.191871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.191897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.192075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.192101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.192325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.192378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.192490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.192517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.192632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.192659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 
00:35:32.824 [2024-11-19 03:16:43.192793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.192820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.192939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.824 [2024-11-19 03:16:43.192978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.824 qpair failed and we were unable to recover it. 00:35:32.824 [2024-11-19 03:16:43.193100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.193127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.193268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.193296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.193410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.193438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.193581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.193608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.193751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.193791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.193909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.193938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.194049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.194075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.194190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.194217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 
00:35:32.825 [2024-11-19 03:16:43.194344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.194409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.194574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.194639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.194845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.194886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.195128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.195178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.195373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.195432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.195517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.195553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.195701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.195741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.195852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.195879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.195971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.195999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.196095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.196124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 
00:35:32.825 [2024-11-19 03:16:43.196299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.196358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.196443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.196470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.196581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.196609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.196752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.196778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.196898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.196925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.197013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.197039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.197119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.197145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.197259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.197287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.197406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.197434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.197554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.197581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 
00:35:32.825 [2024-11-19 03:16:43.197703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.197732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.197829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.197856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.197986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.198025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.198153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.198181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.198269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.198296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.198416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.198443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.198581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.198608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.198731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.198772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.198921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.198949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.825 qpair failed and we were unable to recover it. 00:35:32.825 [2024-11-19 03:16:43.199066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.825 [2024-11-19 03:16:43.199144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 
00:35:32.826 [2024-11-19 03:16:43.199466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.199531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.199748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.199777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.199891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.199932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.200046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.200113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.200284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.200341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.200430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.200457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.200574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.200603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.200687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.200726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.200814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.200842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.200961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.200988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 
00:35:32.826 [2024-11-19 03:16:43.201170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.201240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.201386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.201438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.201565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.201592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.201713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.201742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.201884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.201911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.202025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.202057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.202138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.202165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.202335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.202382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.202499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.202525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.202638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.202664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 
00:35:32.826 [2024-11-19 03:16:43.202789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.202816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.202893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.202920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.203056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.203170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.203284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.203385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.203519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.203633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.826 [2024-11-19 03:16:43.203769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.826 qpair failed and we were unable to recover it. 00:35:32.826 [2024-11-19 03:16:43.203863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.203890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 
00:35:32.827 [2024-11-19 03:16:43.203980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.204020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.204149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.204178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.204301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.204327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.204439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.204466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.204608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.204635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.204757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.204785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.204871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.204898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.204985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.205012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.205189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.205245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.205499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.205563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 
00:35:32.827 [2024-11-19 03:16:43.205719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.205746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.205863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.205890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.205976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.206004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.206118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.206146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.206236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.206263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.206414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.206463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.206557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.206583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.206754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.206795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.206900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.206929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.207054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.207082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 
00:35:32.827 [2024-11-19 03:16:43.207226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.207254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.207364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.207391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.207475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.207502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.207621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.207648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.207773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.207802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.207942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.207970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.208216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.208279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.208507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.208558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.208674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.208715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.208831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.208859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 
00:35:32.827 [2024-11-19 03:16:43.208941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.208968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.209107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.209134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.209212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.209239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.209341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.209368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.209483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.209510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.209626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.827 [2024-11-19 03:16:43.209653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.827 qpair failed and we were unable to recover it. 00:35:32.827 [2024-11-19 03:16:43.209801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.209828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.209919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.209948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.210056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.210083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.210184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.210211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 
00:35:32.828 [2024-11-19 03:16:43.210325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.210352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.210465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.210491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.210613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.210641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.210770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.210799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.210911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.210938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.211161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.211215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.211380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.211450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.211595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.211623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.211708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.211736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.211854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.211882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 
00:35:32.828 [2024-11-19 03:16:43.211972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.212000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.212125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.212152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.212294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.212326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.212410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.212438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.212553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.212580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.212701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.212728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.212871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.212898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.213010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.213169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.213313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 
00:35:32.828 [2024-11-19 03:16:43.213453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.213599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.213719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.213823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.213962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.213989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.214103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.214130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.214248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.214275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.214391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.214419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.214517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.214544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.214656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.214683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 
00:35:32.828 [2024-11-19 03:16:43.214808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.214836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.828 qpair failed and we were unable to recover it. 00:35:32.828 [2024-11-19 03:16:43.214923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.828 [2024-11-19 03:16:43.214951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.215067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.215095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.215171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.215198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.215342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.215369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.215487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.215514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.215654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.215681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.215796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.215823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.215923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.215950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.216070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.216098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 
00:35:32.829 [2024-11-19 03:16:43.216209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.216236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.216348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.216375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.216487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.216514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.216595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.216622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.216748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.216788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.216939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.216967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.217061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.217088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.217181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.217208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.217313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.217340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.217421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.217448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 
00:35:32.829 [2024-11-19 03:16:43.217553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.217579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.217670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.217705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.217822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.217853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.217972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.218000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.218150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.218215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.218485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.218549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.218728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.218756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.218834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.218861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.218949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.218975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.219052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.219078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 
00:35:32.829 [2024-11-19 03:16:43.219168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.219251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.219439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.219504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.219708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.219735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.219827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.219853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.219957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.219984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.220186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.220251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.220479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.220544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.220803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.220830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.220959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.220999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.829 qpair failed and we were unable to recover it. 00:35:32.829 [2024-11-19 03:16:43.221118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.829 [2024-11-19 03:16:43.221147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 
00:35:32.830 [2024-11-19 03:16:43.221242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.221269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.221416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.221443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.221559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.221587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.221673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.221708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.221799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.221840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.221964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.221993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.222112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.222139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.222288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.222316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.222465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.222492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.222570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.222602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 
00:35:32.830 [2024-11-19 03:16:43.222693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.222721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.222839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.222866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.222985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.223095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.223239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.223375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.223491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.223664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.223817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.223930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.223957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 
00:35:32.830 [2024-11-19 03:16:43.224097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.224124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.224259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.224323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.224437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.224464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.224549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.224575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.224700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.224728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.224815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.224843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.224931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.224958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.225043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.225070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.225182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.225209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.225325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.225351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 
00:35:32.830 [2024-11-19 03:16:43.225470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.225498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.225574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.225602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.225710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.225738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.225855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.225882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.830 qpair failed and we were unable to recover it. 00:35:32.830 [2024-11-19 03:16:43.226007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.830 [2024-11-19 03:16:43.226034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.226130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.226156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.226286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.226315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.226403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.226430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.226546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.226573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.226687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.226720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 
00:35:32.831 [2024-11-19 03:16:43.226811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.226838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.226950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.226977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.227088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.227116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.227224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.227250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.227332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.227360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.227473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.227499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.227608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.227635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.227755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.227782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.227886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.227914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.228087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.228142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 
00:35:32.831 [2024-11-19 03:16:43.228227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.228254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.228370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.228397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.228513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.228540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.228629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.228657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.228773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.228812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.228939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.228966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.229060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.229088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.229203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.229231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.229344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.229370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.229513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.229540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 
00:35:32.831 [2024-11-19 03:16:43.229649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.229675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.229779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.229806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.229887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.229969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.230182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.230248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.230550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.230614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.230778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.230808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.230940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.230997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.831 qpair failed and we were unable to recover it. 00:35:32.831 [2024-11-19 03:16:43.231185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.831 [2024-11-19 03:16:43.231242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.231409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.231455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.231600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.231627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 
00:35:32.832 [2024-11-19 03:16:43.231775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.231805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.231899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.231926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.232096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.232149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.232339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.232395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.232488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.232516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.232607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.232633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.232782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.232822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.232939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.232979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.233124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.233152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.233269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.233296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 
00:35:32.832 [2024-11-19 03:16:43.233461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.233526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.233783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.233811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.233927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.233964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.234083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.234111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.234279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.234340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.234573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.234637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.234850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.234876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.234964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.234991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.235232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.235297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.235530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.235557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 
00:35:32.832 [2024-11-19 03:16:43.235773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.235800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.235917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.235953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.236044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.236071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.236249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.236312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.236524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.236582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.236850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.236877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.236979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.237005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.237120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.237147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.237383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.237458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.237766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.237793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 
00:35:32.832 [2024-11-19 03:16:43.237902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.237928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.238021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.238090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.238304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.238368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.238655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.238753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.238897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.832 [2024-11-19 03:16:43.238923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.832 qpair failed and we were unable to recover it. 00:35:32.832 [2024-11-19 03:16:43.239022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.833 [2024-11-19 03:16:43.239082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.833 qpair failed and we were unable to recover it. 00:35:32.833 [2024-11-19 03:16:43.239352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.833 [2024-11-19 03:16:43.239416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.833 qpair failed and we were unable to recover it. 00:35:32.833 [2024-11-19 03:16:43.239631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.833 [2024-11-19 03:16:43.239658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.833 qpair failed and we were unable to recover it. 00:35:32.833 [2024-11-19 03:16:43.239812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.833 [2024-11-19 03:16:43.239839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.833 qpair failed and we were unable to recover it. 00:35:32.833 [2024-11-19 03:16:43.239959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.833 [2024-11-19 03:16:43.239985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.833 qpair failed and we were unable to recover it. 
00:35:32.833 [2024-11-19 03:16:43.240136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.833 [2024-11-19 03:16:43.240185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:32.833 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x1942b40 or tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 03:16:43.240136 through 03:16:43.284254 ...]
00:35:32.838 [2024-11-19 03:16:43.284219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.838 [2024-11-19 03:16:43.284254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420
00:35:32.838 qpair failed and we were unable to recover it.
00:35:32.838 [2024-11-19 03:16:43.284395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.284429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.284590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.284623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.284767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.284803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.284948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.284984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.285128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.285162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.285318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.285351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.285471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.285506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.285645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.285679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.838 qpair failed and we were unable to recover it. 00:35:32.838 [2024-11-19 03:16:43.285807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.838 [2024-11-19 03:16:43.285840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.285986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.286031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 
00:35:32.839 [2024-11-19 03:16:43.286179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.286213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.286322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.286356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.286492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.286530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.286675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.286733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.286879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.286913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.287056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.287091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.287232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.287268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.287455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.287490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.287600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.287634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.287777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.287829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 
00:35:32.839 [2024-11-19 03:16:43.287949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.287986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.288099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.288135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.288322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.288357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.288521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.288588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.288781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.288817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.288932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.288974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.289145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.289180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.289389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.289426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.289578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.289613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.289715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.289757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 
00:35:32.839 [2024-11-19 03:16:43.289866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.289902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.290140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.290175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.290283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.290318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.290495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.290528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.290733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.290767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.290883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.290917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.291071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.291103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.291253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.291288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.291489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.291566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.291735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.291769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 
00:35:32.839 [2024-11-19 03:16:43.291888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.291924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.292073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.292109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.292387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.292422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.292574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.292606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.292728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.292759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.292865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.292895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.293016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.293048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.293274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.839 [2024-11-19 03:16:43.293308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.839 qpair failed and we were unable to recover it. 00:35:32.839 [2024-11-19 03:16:43.293443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.293479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.294627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.294669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 
00:35:32.840 [2024-11-19 03:16:43.294864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.294897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.295048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.295080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.295187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.295228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.295335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.295372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.295481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.295513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.295674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.295714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.295829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.295862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.295993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.296024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.296180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.296243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.296430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.296470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 
00:35:32.840 [2024-11-19 03:16:43.296661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.296704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.296830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.296859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.296971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.297000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.297106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.297142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.297255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.297284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.297452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.297484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.297628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.297659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.297814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.297845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.297945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.297977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.298076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.298108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 
00:35:32.840 [2024-11-19 03:16:43.298209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.298242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.298428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.298462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.298574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.298608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.298722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.298773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.298913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.298945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.299045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.299076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.299196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.299228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.299351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.299383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.299543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.299599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.299774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.299820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 
00:35:32.840 [2024-11-19 03:16:43.299958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.300006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.300167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.300201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.300362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.300394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.300519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-19 03:16:43.300555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-19 03:16:43.300751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.300786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.300888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.300921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.301105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.301140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.301311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.301347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.301516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.301551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.301704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.301764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-19 03:16:43.301880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.301912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.302053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.302085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.302254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.302289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.302435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.302479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.302638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.302673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.302820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.302853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.302973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.303005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.303133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.303169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.303312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.303346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.303540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.303576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-19 03:16:43.303681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.303744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.303864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.303896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.304087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.304136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.304320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.304353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.304454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.304488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.304647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.304681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.304794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.304827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.304931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.304964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.305090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.305122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.305286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.305318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-19 03:16:43.305409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.305442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.305610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.305645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.305791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-19 03:16:43.305825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-19 03:16:43.305938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.305970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.306127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.306161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.306278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.306312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.306414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.306449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.306607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.306641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.306788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.306820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.306958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.306992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-19 03:16:43.307100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.307143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.307307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.307343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.307486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.307522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.307665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.307710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.307878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.307912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.308075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.308111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.308212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.308260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.308384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.308412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.308572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.308603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.308745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.308778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-19 03:16:43.308921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.308952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.309062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.309095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.309267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.309302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.309511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.309547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.309723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.309762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.309960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.309990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.310121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.310158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.310297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.310332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.310435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.310470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.310626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.310660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-19 03:16:43.310813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.310850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.310994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.311046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.311210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.311260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.311463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.311506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.311646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.311676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.311814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.311862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.312034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.312070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.312254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.312311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.312435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.312484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.312618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.312647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-19 03:16:43.312789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.312823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.312968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.312999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.313098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.313130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.313226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.313258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.313390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.313421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.313539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.313567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.313663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.313702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.313922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.313956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.314139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.314173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.314379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.314412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-19 03:16:43.314567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.314596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.314732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.314762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.314857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-19 03:16:43.314886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-19 03:16:43.315062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.315094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.315290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.315324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.315490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.315523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.315661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.315703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.315854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.315885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.316017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.316045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.316226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.316264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 
00:35:32.843 [2024-11-19 03:16:43.316433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.316466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.316633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.316666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.316817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.316853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.317009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.317050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.317270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.317305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.317453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.317488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.317657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.317696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.317904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.317934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.318070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.318099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.318191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.318221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 
00:35:32.843 [2024-11-19 03:16:43.318350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.318382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.318486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.318517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.318632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.318663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.318810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.318843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.318970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.319019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.319166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.319201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.319316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.319360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.319522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.319557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.319740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.319772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.319875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.319906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 
00:35:32.843 [2024-11-19 03:16:43.320108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.320158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.320297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.320331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.320507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.320542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.320681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.320757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.320853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.320883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.320997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.321030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.322069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.322104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.322237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.322264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.322418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.322446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.322600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.322629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 
00:35:32.843 [2024-11-19 03:16:43.322754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.322783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.322903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.322932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.323063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.323091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.323187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.323215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.323311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.323339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.323442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.323469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.323599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.323626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.323748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.323777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.323904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.323932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.324095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.324124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 
00:35:32.843 [2024-11-19 03:16:43.324218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.324247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.324373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.324403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.324497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.843 [2024-11-19 03:16:43.324526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.843 qpair failed and we were unable to recover it. 00:35:32.843 [2024-11-19 03:16:43.324649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.324703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.324845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.324887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.325075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.325118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.325244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.325271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.325370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.325399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.325516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.325544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.325664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.325703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 
00:35:32.844 [2024-11-19 03:16:43.325812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.325841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.325956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.325984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.326076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.326104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.326189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.326216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.326338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.326366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.326467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.326494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.326593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.326621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.326783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.326811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.326930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.326957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.327075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.327102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 
00:35:32.844 [2024-11-19 03:16:43.327195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.327223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.327307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.327334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.327463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.327491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.327616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.327644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.327751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.327781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.327864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.327891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.327980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.328008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.328128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.328156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.328316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.328344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.328465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.328493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 
00:35:32.844 [2024-11-19 03:16:43.328628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.328657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.328777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.328826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.328968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.328997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.329141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.329169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.329294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.329322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.329411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.329439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.329562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.329589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.329697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.329761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.329917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.329984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.330129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.330163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 
00:35:32.844 [2024-11-19 03:16:43.330326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.330361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.330491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.330521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.330651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.330680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.330797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.330826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.330988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.331017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.331135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.844 [2024-11-19 03:16:43.331165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.844 qpair failed and we were unable to recover it. 00:35:32.844 [2024-11-19 03:16:43.331273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.331304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.331406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.331436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.331532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.331562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.331698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.331727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 
00:35:32.845 [2024-11-19 03:16:43.331880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.331909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.331996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.332047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.332181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.332212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.332328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.332360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.332504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.332535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.332668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.332706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.332884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.332912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.333054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.333084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.333216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.333248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.333410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.333440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 
00:35:32.845 [2024-11-19 03:16:43.333541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.333570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.333696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.333741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.333855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.333882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.333993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.334108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.334220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.334396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.334531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.334645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.334779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 
00:35:32.845 [2024-11-19 03:16:43.334912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.334942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.335069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.335095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.335223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.335268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.335406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.335437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.335545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.335577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.335703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.335750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.335844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.335873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.336017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.336047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.336148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.336179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.336314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.336353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 
00:35:32.845 [2024-11-19 03:16:43.336478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.336508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.336636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.336676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.336821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.336851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.336950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.336979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.337083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.337112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.337276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.337302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.337420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.337447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.337535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.337562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.337659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.337701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.337903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.337934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 
00:35:32.845 [2024-11-19 03:16:43.338020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.338046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.338188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.338234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.338343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.338388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.338486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.338516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.338638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.338665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-19 03:16:43.338717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1950970 (9): Bad file descriptor 00:35:32.845 [2024-11-19 03:16:43.338857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-19 03:16:43.338899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.339027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.339061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.339180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.339207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.339348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.339380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-19 03:16:43.339538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.339569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.339678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.339747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.339892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.339924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.340040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.340074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.340187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.340234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.340391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.340423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.340572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.340598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.340725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.340765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.340878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.340904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.341005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.341034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-19 03:16:43.341157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.341185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.341289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.341316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.341447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.341480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.341602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.341653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.341810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.341842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.341964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.341992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.342126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.342152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.342242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.342268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.342346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.342372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.342491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.342545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-19 03:16:43.342634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.342662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.342792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.342831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.342953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.342996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.343121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.343168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.343333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.343373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.343518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.343552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.343678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.343722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.343862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.343888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.343988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.344016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.344102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.344129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-19 03:16:43.344215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.344266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.344409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.344443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.344593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.344624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.344787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.344814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.344899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.344927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.345050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.345075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.345170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.345196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.345316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.345342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.345443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.345470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.345603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.345631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-19 03:16:43.345738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.345766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.345882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.345909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.345993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.346041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.346204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.346237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.346377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.346444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.346575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.346627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.346729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-19 03:16:43.346756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-19 03:16:43.346843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.346869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.346987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.347013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.347130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.347156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-19 03:16:43.347270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.347297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.347435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.347475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.347638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.347667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.347781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.347807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.347903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.347928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.348010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.348035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.348118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.348143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.348226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.348251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.348370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.348401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.348520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.348551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-19 03:16:43.348679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.348737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.348853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.348895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.349040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.349076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.349184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.349212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.349361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.349416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.349500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.349539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.349650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.349681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.349781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.349810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.349917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.349954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.350135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.350183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-19 03:16:43.350353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.350389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.350556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.350594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.350711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.350759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.350858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.350887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.351042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.351075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.351224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.351258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.351375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.351425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.351557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.351585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.351685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.351720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.351808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.351835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-19 03:16:43.351977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.352028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.352117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.352146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.352277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.352322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.352456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.352484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.352615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.352642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.352789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.352836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.352928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.352955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.353057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.353087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.353286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.353333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.353469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.353496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-19 03:16:43.353628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.353656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.353775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.353809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.353913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.353944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.354142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.354183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.354299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.354336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.354449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.354477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.354596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.354623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-19 03:16:43.354716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-19 03:16:43.354744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.354837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.354866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.354976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.355005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-19 03:16:43.355143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.355190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.355299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.355333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.355481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.355514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.355632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.355663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.355808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.355849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.356008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.356039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.356151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.356198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.356417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.356457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.357507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.357560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.357701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.357750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-19 03:16:43.357853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.357897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.358068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.358097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.358208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.358238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.358338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.358367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.358469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.358502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.358648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.358679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.358862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.358922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.359077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.359124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.359280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.359325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.359472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.359504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-19 03:16:43.359649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.359681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.359818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.359850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.360004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.360043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.360183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.360234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.360352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.360381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.360566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.360600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.360718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.360768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.360882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.360912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.361049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.361078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.361173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.361214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-19 03:16:43.361341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.361371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.361582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.361623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.361789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.361817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.361949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.361979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.362102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.362131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.362232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.362262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.362399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.362428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.362588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.362618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.362717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.362762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.362852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.362898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-19 03:16:43.363041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-19 03:16:43.363072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-19 03:16:43.363195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.363226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.363353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.363388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.363509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.363559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.363682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.363717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.363874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.363902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.364040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.364069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.364193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.364237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.364352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.364400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.364553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.364581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-19 03:16:43.364737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.364766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.364911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.364944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.365052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.365080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.365212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.365240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.365414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.365461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.365607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.365653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.365817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.365850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.365994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.366044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.366183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.366242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.366381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.366431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-19 03:16:43.366537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.366568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.366723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.366753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.366847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.366876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.367002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.367030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.367121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.367150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.367266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.367296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.367435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.367467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.367592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.367623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.367751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.367779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.367873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.367903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-19 03:16:43.368081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.368110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.368278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.368313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.368439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.368469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.368597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.368627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.368753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.368781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.368877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.368907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.369023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.369052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.369185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.369215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.369319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.369349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.369500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.369533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-19 03:16:43.369649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.369676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.369804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.369833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.369917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.369944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.370119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.370149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.370257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.370300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.370442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.370494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.370660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.370696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.370809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.370837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.370985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.371015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.371135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.371178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-19 03:16:43.371325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-19 03:16:43.371373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-19 03:16:43.371473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.371503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.371656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.371683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.371817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.371845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.371936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.371966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.372100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.372130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.372255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.372286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.372379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.372409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.372547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.372582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.372706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.372755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.850 [2024-11-19 03:16:43.372843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.372873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.373013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.373040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.373195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.373224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.373360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.373390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.373502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.373529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.373635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.373665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.373819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.373849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.373988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.374038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.374177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.374227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.374369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.374416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.850 [2024-11-19 03:16:43.374510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.374538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.374636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.374666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.374824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.374850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.374961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.374997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.375123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.375149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.375270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.375296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.375411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.375445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.375559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.375587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.375668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.375736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.375859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.375885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.850 [2024-11-19 03:16:43.375992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.376017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.376088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.376114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.376215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.376242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-19 03:16:43.376352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-19 03:16:43.376377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.376466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.376504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.376603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.376636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.376766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.376793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.376881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.376907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.376996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.377022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.377121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.377151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 
00:35:33.135 [2024-11-19 03:16:43.377271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.377318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.377410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.377442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.377534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.377564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.377715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.377760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.377849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.377875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.377963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.378007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.378222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.378252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.378406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.378449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.378569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.378616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.378875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.378905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 
00:35:33.135 [2024-11-19 03:16:43.379018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.379048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.379192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.379223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.379357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.379387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.379478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.379509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.379630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.379657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.379788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.379816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.379899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.379926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.380028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.380055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.380166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.135 [2024-11-19 03:16:43.380196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.135 qpair failed and we were unable to recover it. 00:35:33.135 [2024-11-19 03:16:43.380335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.380364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 
00:35:33.136 [2024-11-19 03:16:43.380526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.380553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.380673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.380708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.380848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.380875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.380966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.381115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.381273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.381416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.381573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.381683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.381812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 
00:35:33.136 [2024-11-19 03:16:43.381928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.381963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.382047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.382080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.382235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.382264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.382466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.382495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.382641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.382669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.382772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.382800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.382941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.383062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.383098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.383255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.383305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.383402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.383432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 
00:35:33.136 [2024-11-19 03:16:43.383529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.383557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.383681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.383725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.383832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.383860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.383996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.384039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.384210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.384243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.384406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.384438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.384571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.384600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.384778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.384819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.384942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.384990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.385111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.385140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 
00:35:33.136 [2024-11-19 03:16:43.385316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.385367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.385523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.385553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.385647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.385677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.385792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.385819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.385933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.385985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.386075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.386105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.136 qpair failed and we were unable to recover it. 00:35:33.136 [2024-11-19 03:16:43.386272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.136 [2024-11-19 03:16:43.386302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.386465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.386513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.386630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.386659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.386808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.386835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 
00:35:33.137 [2024-11-19 03:16:43.386920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.386948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.387089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.387120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.387269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.387326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.387446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.387474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.387563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.387591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.387682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.387731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.387813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.387839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.387923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.387950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.388089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.388116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.388206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.388236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 
00:35:33.137 [2024-11-19 03:16:43.388358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.388386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.388479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.388512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.388604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.388632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.388832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.388858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.388947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.388983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.389084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.389110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.389226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.389253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.389388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.389440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.389610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.389638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.389776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.389803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 
00:35:33.137 [2024-11-19 03:16:43.389894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.389920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.390063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.390093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.390204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.390249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.390396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.390439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.390576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.390606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.390699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.390749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.390863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.390888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.391058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.391103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.391220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.391246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.391392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.391447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 
00:35:33.137 [2024-11-19 03:16:43.391624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.391651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.391784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.391812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.391901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.391926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.392039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.137 [2024-11-19 03:16:43.392065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.137 qpair failed and we were unable to recover it. 00:35:33.137 [2024-11-19 03:16:43.392169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.138 [2024-11-19 03:16:43.392200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.138 qpair failed and we were unable to recover it. 00:35:33.138 [2024-11-19 03:16:43.392304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.138 [2024-11-19 03:16:43.392332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.138 qpair failed and we were unable to recover it. 00:35:33.138 [2024-11-19 03:16:43.392451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.138 [2024-11-19 03:16:43.392482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.138 qpair failed and we were unable to recover it. 00:35:33.138 [2024-11-19 03:16:43.392643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.138 [2024-11-19 03:16:43.392674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.138 qpair failed and we were unable to recover it. 00:35:33.138 [2024-11-19 03:16:43.392807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.138 [2024-11-19 03:16:43.392834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.138 qpair failed and we were unable to recover it. 00:35:33.138 [2024-11-19 03:16:43.392956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.138 [2024-11-19 03:16:43.392999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.138 qpair failed and we were unable to recover it. 
00:35:33.138 [2024-11-19 03:16:43.393121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.138 [2024-11-19 03:16:43.393150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420
00:35:33.138 qpair failed and we were unable to recover it.
00:35:33.143 [2024-11-19 03:16:43.393295 through 03:16:43.427163] The same three-line sequence repeats for every reconnect attempt in this window: posix.c:1054:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error, and the qpair is declared "failed and we were unable to recover it." The attempts cycle over tqpair=0x7f1b70000b90, 0x7f1b74000b90, 0x7f1b7c000b90 and 0x1942b40, all targeting addr=10.0.0.2, port=4420.
00:35:33.143 [2024-11-19 03:16:43.427323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.143 [2024-11-19 03:16:43.427357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.143 qpair failed and we were unable to recover it. 00:35:33.143 [2024-11-19 03:16:43.427523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.143 [2024-11-19 03:16:43.427561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.143 qpair failed and we were unable to recover it. 00:35:33.143 [2024-11-19 03:16:43.427685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.143 [2024-11-19 03:16:43.427746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.143 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.427863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.427891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.427989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.428037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.428197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.428235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.428391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.428428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.428564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.428596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.428755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.428784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.428898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.428926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 
00:35:33.144 [2024-11-19 03:16:43.429021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.429050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.429225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.429270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.429401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.429431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.429564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.429607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.429727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.429755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.429877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.429904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.430023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.430071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.430222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.430259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.430376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.430420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.430578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.430609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 
00:35:33.144 [2024-11-19 03:16:43.430779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.430812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.430996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.431045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.431161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.431211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.431359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.431408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.431531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.431560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.431681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.431736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.431850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.431878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.431965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.431992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.432081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.432126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.432289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.432330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 
00:35:33.144 [2024-11-19 03:16:43.432486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.432515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.432698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.432736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.432827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.432856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.432975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.433012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.433209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.144 [2024-11-19 03:16:43.433246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.144 qpair failed and we were unable to recover it. 00:35:33.144 [2024-11-19 03:16:43.433395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.433432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.433595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.433636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.433769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.433799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.433921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.433948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.434098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.434144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 
00:35:33.145 [2024-11-19 03:16:43.434258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.434303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.434394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.434423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.434515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.434546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.434665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.434714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.434839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.434868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.435038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.435083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.435221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.435251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.435394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.435445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.435546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.435574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.435661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.435696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 
00:35:33.145 [2024-11-19 03:16:43.435805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.435834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.435974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.436025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.436208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.436257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.436378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.436431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.436557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.436587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.436705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.436746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.436897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.436945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.437114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.437155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.437287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.437315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.437485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.437523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 
00:35:33.145 [2024-11-19 03:16:43.437639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.437666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.437778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.437807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.437945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.437975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.438133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.438162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.438302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.438338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.438453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.438492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.438670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.438706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.438827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.438855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.438952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.438997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.439182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.439212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 
00:35:33.145 [2024-11-19 03:16:43.439399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.439450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.439608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.439637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.145 [2024-11-19 03:16:43.439775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.145 [2024-11-19 03:16:43.439803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.145 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.439930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.439957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.440142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.440199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.440343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.440383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.440622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.440706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.440844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.440872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.440959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.440987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.441107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.441135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 
00:35:33.146 [2024-11-19 03:16:43.441250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.441278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.441400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.441468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.441590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.441620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.441703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.441732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.441835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.441864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.441953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.441982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.442120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.442147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.442274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.442311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.442446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.442492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.442644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.442675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 
00:35:33.146 [2024-11-19 03:16:43.442860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.442890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.442995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.443024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.443176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.443204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.443298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.443326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.443517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.443585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.443715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.443746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.443898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.443926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.444071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.444121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.444312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.444364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.444528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.444568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 
00:35:33.146 [2024-11-19 03:16:43.444715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.444747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.444895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.444924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.445055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.445085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.445186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.445216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.445354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.445420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.445567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.445596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.445752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.445780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.445949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.445996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.446177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.446230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 00:35:33.146 [2024-11-19 03:16:43.446371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.446419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.146 qpair failed and we were unable to recover it. 
00:35:33.146 [2024-11-19 03:16:43.446537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.146 [2024-11-19 03:16:43.446565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.446669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.446741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.446888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.446933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.447091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.447130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.447317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.447363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.447569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.447598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.447715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.447765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.447927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.447957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.448057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.448102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.448235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.448265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 
00:35:33.147 [2024-11-19 03:16:43.448456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.448486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.448633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.448678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.448817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.448846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.448985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.449015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.449157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.449209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.449350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.449401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.449591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.449641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.449779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.449807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.449912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.449941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.450088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.450135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 
00:35:33.147 [2024-11-19 03:16:43.450234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.450264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.450382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.450411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.450524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.450585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.450731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.450762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.450874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.450919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.451027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.451057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.451161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.451189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.451312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.451340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.451459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.451487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.451606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.451635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 
00:35:33.147 [2024-11-19 03:16:43.451746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.451789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.451894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.451923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.452053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.452094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.452248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.452277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.452375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.452403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.452520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.452549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.452681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.452717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.452860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.147 [2024-11-19 03:16:43.452908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.147 qpair failed and we were unable to recover it. 00:35:33.147 [2024-11-19 03:16:43.453022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.453072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.453220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.453268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-19 03:16:43.453356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.453384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.453466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.453494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.453588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.453617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.453713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.453761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.453894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.453930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.454125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.454164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.454288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.454328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.454462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.454501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.454685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.454723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.454875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.454904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-19 03:16:43.455037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.455084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.455276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.455326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.455449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.455477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.455596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.455624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.455770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.455816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.455957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.456003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.456143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.456192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.456343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.456370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.456463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.456491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.456618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.456647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-19 03:16:43.456756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.456802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.456934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.456964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.457095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.457124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.457289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.457327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.457516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.457555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.457703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.457752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.457874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.457904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.457996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.458026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.458155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.458186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.458327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.458391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-19 03:16:43.458586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.458638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.458832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.458885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.458975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.459036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.459208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.459260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.459346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.459374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-19 03:16:43.459466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-19 03:16:43.459494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.459619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.459647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.459781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.459809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.459902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.459930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.460048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.460076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-19 03:16:43.460195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.460224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.460349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.460379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.460519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.460561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.460661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.460697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.460812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.460841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.460941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.460970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.461123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.461151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.461289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.461343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.461503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.461544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.461713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.461748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-19 03:16:43.461876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.461906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.462033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.462062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.462207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.462246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.462426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.462475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.462567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.462597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.462738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.462780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.462886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.462916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.463086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.463136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.463313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.463344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.463538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.463566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-19 03:16:43.463715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.463746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.463861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.463907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.464008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.464047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.464203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.464260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.464406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.464455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.464570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.464600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.464757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.464799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.464891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.464937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-19 03:16:43.465094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-19 03:16:43.465143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.465293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.465346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-19 03:16:43.465475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.465504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.465620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.465654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.465776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.465805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.465917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.465947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.466065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.466093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.466240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.466287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.466374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.466402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.466562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.466604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.466716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.466758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.466860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.466889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-19 03:16:43.467046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.467086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.467273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.467312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.467471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.467510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.467660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.467697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.467824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.467852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.467949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.467977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.468079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.468127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.468304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.468359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.468542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.468600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.468798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.468829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-19 03:16:43.468943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.468997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.469128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.469180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.469379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.469420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.469536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.469564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.469698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.469731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.469850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.469878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.470049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.470077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.470245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.470284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.470413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.470454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.470578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.470607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-19 03:16:43.470757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.470800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.470896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.470925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.471042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.471094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.471213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.471264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.471411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.471457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-19 03:16:43.471547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-19 03:16:43.471576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.471698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.471727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.471823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.471852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.471943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.471971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.472075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.472103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-19 03:16:43.472233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.472261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.472383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.472410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.472541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.472569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.472698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.472730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.472825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.472854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.472973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.473001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.473089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.473117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.473215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.473244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.473331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.473359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.473509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.473536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-19 03:16:43.473627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.473656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.473806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.473849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.473976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.474005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.474150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.474202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.474311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.474349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.474544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.474571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.474660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.474695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.474819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.474847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.474966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.474998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.475173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.475223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-19 03:16:43.475361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.475415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.475537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.475565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.475682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.475715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.475818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.475863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.476004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.476052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.476199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.476227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.476345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.476373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.476503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.476531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.476628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.476664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.476802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.476843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-19 03:16:43.477005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.477035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.477192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.477220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.477337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.477364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-19 03:16:43.477514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-19 03:16:43.477543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.477666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.477700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.477800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.477827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.477943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.477987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.478090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.478117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.478277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.478336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.478542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.478583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 
00:35:33.152 [2024-11-19 03:16:43.478744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.478773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.478867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.478895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.479028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.479058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.479188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.479218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.479338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.479367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.479471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.479516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.479710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.479747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.479896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.479924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.480067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.480097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.480251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.480281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 
00:35:33.152 [2024-11-19 03:16:43.480424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.480476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.480566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.480596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.480761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.480803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.480898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.480927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.481068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.481118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.481294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.481324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.481514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.481562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.481725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.481773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.481925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.481954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.482041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.482069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 
00:35:33.152 [2024-11-19 03:16:43.482213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.482253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.482385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.482438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.482568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.482607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.482769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.482799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.482944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.482990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.483126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.483176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.483318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.483367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.483513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.483541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.483672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.483715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.483860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.483891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 
00:35:33.152 [2024-11-19 03:16:43.484018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.152 [2024-11-19 03:16:43.484063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-11-19 03:16:43.484238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.484277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.484495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.484534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.484700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.484746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.484865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.484893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.485005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.485060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.485177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.485229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.485392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.485431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.485563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.485593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.485703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.485755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-19 03:16:43.485901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.485943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.486108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.486136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.486331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.486361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.486486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.486527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.486733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.486779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.486889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.486919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.487060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.487090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.487236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.487277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.487433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.487483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.487614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.487646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-19 03:16:43.487763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.487792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.487910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.487942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.488077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.488122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.488271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.488323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.488488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.488528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.488711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.488770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.488901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.488931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.489041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.489071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.489229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.489278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.489441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.489484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-19 03:16:43.489643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.489671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.489805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.489835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.489934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.489979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.490069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-19 03:16:43.490098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-19 03:16:43.490187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.490218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.490377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.490418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.490579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.490620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.490760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.490808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.490946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.490997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.491149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.491206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-19 03:16:43.491362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.491416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.491507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.491534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.491658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.491686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.491837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.491883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.491974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.492001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.492091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.492119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.492200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.492228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.492346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.492374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.492521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.492548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.492669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.492707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-19 03:16:43.492839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.492880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.493002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.493032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.493156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.493184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.493306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.493333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.493462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.493490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.493621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.493664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.493827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.493861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.493957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.493988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.494116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.494146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.494247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.494279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-19 03:16:43.494416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.494447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.494618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.494647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.494791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.494820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.494948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.494994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.495095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.495128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.495242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.495295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.495508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.495558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.495657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.495686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.495814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.495841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.495984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.496011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-19 03:16:43.496123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.496178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-19 03:16:43.496390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-19 03:16:43.496419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.496579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.496611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.496733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.496763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.496894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.496922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.497047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.497076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.497192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.497219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.497410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.497451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.497608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.497638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.497786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.497816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 
00:35:33.155 [2024-11-19 03:16:43.497948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.497989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.498168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.498200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.498361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.498411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.498558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.498585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.498711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.498739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.498908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.498952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.499109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.499158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.499259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.499298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.499393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.499420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.499555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.499598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 
00:35:33.155 [2024-11-19 03:16:43.499739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.499772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.499913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.499946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.500153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.500206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.500358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.500404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.500549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.500591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.500700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.500730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.500859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.500889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.501033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.501077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.501214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.501244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.501373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.501441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 
00:35:33.155 [2024-11-19 03:16:43.501586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.501615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.501741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.501772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.501889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.501917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.502059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.502108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.502278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.502327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.502485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.502526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.502705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.502757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.502850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.502878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.503006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-19 03:16:43.503035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-19 03:16:43.503160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.503190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-19 03:16:43.503355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.503393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.503520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.503547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.503717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.503747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.503842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.503871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.503996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.504026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.504112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.504159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.504386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.504425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.504546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.504575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.504665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.504702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.504860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.504888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-19 03:16:43.505054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.505095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.505259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.505299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.505423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.505475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.505628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.505658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.505792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.505821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.505953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.505998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.506133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.506179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.506331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.506380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.506461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.506489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.506577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.506608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-19 03:16:43.506736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.506766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.506885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.506914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.507029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.507062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.507186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.507215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.507337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.507365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.507486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.507514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.507652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.507704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.507848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.507877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.507996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.508024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-19 03:16:43.508143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-19 03:16:43.508171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-19 03:16:43.508290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-19 03:16:43.508319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156-00:35:33.162 [2024-11-19 03:16:43.508 - 03:16:43.546] the same three-line failure repeats for every reconnect attempt in this window: posix.c:1054:posix_sock_create reports connect() failed, errno = 111, and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpairs 0x1942b40, 0x7f1b70000b90, 0x7f1b74000b90 and 0x7f1b7c000b90, all with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."
00:35:33.162 [2024-11-19 03:16:43.546300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-19 03:16:43.546342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-19 03:16:43.546464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.546494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.546620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.546650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.546792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.546838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.546992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.547059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.547293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.547359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.547551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.547619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.547808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.547837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.547920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.547967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.548091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.548121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.548369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.548437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 
00:35:33.162 [2024-11-19 03:16:43.548664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.548707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.548851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.548883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.549072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.549132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.549228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.549259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.549402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.549456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.549626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.549655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.549776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.549805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.549897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-19 03:16:43.549926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-19 03:16:43.550017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.550047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.550136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.550164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 
00:35:33.163 [2024-11-19 03:16:43.550312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.550341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.550434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.550463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.550584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.550614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.550705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.550736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.550879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.550908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.550991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.551020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.551141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.551170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.551297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.551363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.551566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.551611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.551746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.551779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 
00:35:33.163 [2024-11-19 03:16:43.551924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.551970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.552218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.552288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.552583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.552638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.552740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.552770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.552954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.553006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.553153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.553210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.553399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.553450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.553544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.553573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.553674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.553712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.553885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.553943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 
00:35:33.163 [2024-11-19 03:16:43.554121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.554175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.554338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.554388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.554506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.554535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.554721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.554769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.554967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.555040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.555301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.555368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.555597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.555662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.555885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.555941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.556030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.556059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-19 03:16:43.556145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-19 03:16:43.556174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 
00:35:33.163 [2024-11-19 03:16:43.556383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.556455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.556738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.556809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.557082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.557152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.557486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.557551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.557759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.557793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.557945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.557974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.558172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.558238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.558530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.558596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.558837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.558868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.558973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.559003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-19 03:16:43.559118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.559147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.559266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.559296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.559468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.559534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.559779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.559815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.559945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.559975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.560066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.560098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.560286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.560344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.560469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.560497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.560581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.560610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.560729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.560759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-19 03:16:43.560888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.560916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.561040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.561068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.561239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.561297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.561394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.561423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.561549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.561577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.561700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.561729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.561828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.561857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.562043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.562114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.562341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.562408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.562662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.562698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-19 03:16:43.562796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.562826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.562928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.562957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.563103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.563132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.563302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.563360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.563493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.563536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.563669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.563709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.563816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.563845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-19 03:16:43.563997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-19 03:16:43.564025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.564309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.564373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.564567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.564598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-19 03:16:43.564758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.564788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.564878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.564938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.565184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.565249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.565438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.565504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.565767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.565797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.565919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.565948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.566160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.566226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.566491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.566555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.566819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.566848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.566974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.567003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-19 03:16:43.567125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.567154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.567307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.567372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.567713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.567765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.567909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.567942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.568112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.568178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.568474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.568539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.568763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.568793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.568944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.568973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.569125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.569191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.569435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.569464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-19 03:16:43.569723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.569780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.569928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.569956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.570056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.570084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.570199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.570228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.570409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.570479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.570714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.570763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.570918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.570947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.571072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.571100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.571223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.571253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.571421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.571449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-19 03:16:43.571570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.571602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.571711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.571741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.571862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.571891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.571980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.572009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.572154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-19 03:16:43.572183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-19 03:16:43.572308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.572350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.572501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.572531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.572625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.572653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.572787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.572817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.572963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.572992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-19 03:16:43.573080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.573109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.573254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.573282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.573493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.573562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.573811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.573841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.573998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.574064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.574354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.574419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.574675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.574752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.574875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.574954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.575206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.575274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.575568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.575633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-19 03:16:43.575847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.575879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.576059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.576088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.576238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.576267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.576410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.576443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.576576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.576619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.576752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.576784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.576906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.576936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.577204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.577269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.577558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.577623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.577890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.577957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-19 03:16:43.578290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.578354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.578654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.578746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.578840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.578869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.579032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.579098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.579341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.579408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.579622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.579714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.579856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.579927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.580203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.580273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.580538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.580605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.580828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.580857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-19 03:16:43.581027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.581092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.581386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.581451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-19 03:16:43.581634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-19 03:16:43.581662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.581771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.581799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.581895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.581923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.582039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.582068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.582217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.582294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.582605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.582669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.582878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.582907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.583053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.583120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 
00:35:33.167 [2024-11-19 03:16:43.583399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.583466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.583678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.583719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.583819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.583849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.583945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.584012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.584232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.584296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.584549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.584615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.584898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.584969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.585265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.585330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.585636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.585723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.586020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.586086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 
00:35:33.167 [2024-11-19 03:16:43.586333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.586400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.586660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.586746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.587036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.587101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.587356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.587431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.587732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.587799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.588098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.588164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.588417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.588481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.588682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.588762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.589054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.589119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.589389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.589454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 
00:35:33.167 [2024-11-19 03:16:43.589716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.589782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.589993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.590059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.590300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.590364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.590562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.590627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.590944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.591009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.591214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-11-19 03:16:43.591279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.167 qpair failed and we were unable to recover it. 00:35:33.167 [2024-11-19 03:16:43.591585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.591651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.591951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.592019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.592275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.592340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.592587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.592654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-19 03:16:43.592941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.593009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.593305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.593370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.593660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.593747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.594044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.594108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.594355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.594423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.594726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.594793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.595021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.595085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.595335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.595402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.595655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.595751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.596045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.596110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-19 03:16:43.596375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.596441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.596685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.596766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.597049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.597114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.597391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.597457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.597758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.597824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.598117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.598182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.598475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.598538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.598832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.598898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.599197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.599262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.599454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.599518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-19 03:16:43.599777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.599843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.600135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.600199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.600450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.600517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.600765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.600843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.601101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.601166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.601376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.601442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.601716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.601783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.602083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.602148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.602436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.602501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.602786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.602852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-19 03:16:43.603140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.603205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.603467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.603532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.603753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.603819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-19 03:16:43.604063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-19 03:16:43.604129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.604417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.604482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.604770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.604836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.605045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.605112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.605416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.605480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.605726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.605792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.606081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.606147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-19 03:16:43.606390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.606455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.606755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.606823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.607123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.607189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.607478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.607543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.607838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.607904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.608157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.608222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.608498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.608563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.608763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.608828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.609091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.609156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.609407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.609472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-19 03:16:43.609732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.609801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.610074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.610139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.610375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.610441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.610668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.610759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.610981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.611045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.611265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.611332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.611532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.611599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.611886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.611953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.612223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.612288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.612544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.612610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-19 03:16:43.612862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.612928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.613231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.613296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.613583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.613647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.613909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.613986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.614292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.614357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.614608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.614672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.614955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.615021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.615275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.615342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.615584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.615649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.615897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.615963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-19 03:16:43.616208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-19 03:16:43.616273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-19 03:16:43.616470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.616534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.616822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.616886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.617183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.617245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.617477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.617538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.617834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.617897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.618097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.618162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.618464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.618530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.618831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.618897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.619100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.619164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 
00:35:33.170 [2024-11-19 03:16:43.619466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.619531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.619784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.619851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.620101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.620169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.620418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.620485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.620781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.620848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.621054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.621118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.621407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.621472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.621774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.621840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.622100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.622167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.622464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.622529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 
00:35:33.170 [2024-11-19 03:16:43.622817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.622889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.623222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.623297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.623571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.623640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.623951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.624017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.624277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.624342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.624632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.624717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.624986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.625052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.625316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.625380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.625626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.625711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 00:35:33.170 [2024-11-19 03:16:43.626015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.170 [2024-11-19 03:16:43.626080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.170 qpair failed and we were unable to recover it. 
00:35:33.170 [2024-11-19 03:16:43.626375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.170 [2024-11-19 03:16:43.626440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.170 qpair failed and we were unable to recover it.
00:35:33.170 [2024-11-19 03:16:43.626742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.170 [2024-11-19 03:16:43.626809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.170 qpair failed and we were unable to recover it.
00:35:33.170-00:35:33.176 [same three-line sequence repeated for every remaining connection attempt from 03:16:43.627053 through 03:16:43.696478: connect() failed, errno = 111; sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it]
00:35:33.176 [2024-11-19 03:16:43.696787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.696853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.697150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.697216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.697520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.697585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.697890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.697957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.698243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.698309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.698598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.698662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.698888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.698956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.699257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.699323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.699565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.699631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.699919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.699985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 
00:35:33.176 [2024-11-19 03:16:43.700270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.700335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.700579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.700644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.700901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.700966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.701246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.701311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.701565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.701628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.701901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.701967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.702220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.702285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.702507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.702573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.702870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.702936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.703181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.703246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 
00:35:33.176 [2024-11-19 03:16:43.703549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.176 [2024-11-19 03:16:43.703614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.176 qpair failed and we were unable to recover it. 00:35:33.176 [2024-11-19 03:16:43.703917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.703984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.704225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.704289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.704598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.704662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.704971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.705037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.705279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.705343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.705593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.705660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.705931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.705997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.706217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.706281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.706586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.706651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.177 [2024-11-19 03:16:43.706928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.706993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.707246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.707310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.707561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.707627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.707891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.707958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.708262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.708326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.708580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.708647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.708972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.709037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.709286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.709350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.709641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.709726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.710022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.710087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.177 [2024-11-19 03:16:43.710345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.710411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.710667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.710749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.711012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.711077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.711314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.711380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.711612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.711677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.712000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.712065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.712313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.712380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.712659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.712748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.713004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.713069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.713316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.713383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.177 [2024-11-19 03:16:43.713672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.713755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.713999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.714063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.714315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.714380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.714623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.714711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.714980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.715044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.715288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.715353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.715542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.715609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.715905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.715971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.716228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.716294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.716548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.716611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.177 [2024-11-19 03:16:43.716875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.716953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-19 03:16:43.717251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-19 03:16:43.717318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.717565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.717629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.717841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.717909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.718204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.718269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.718622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.718931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.718997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.719298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.719362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.719653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.719745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.719999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.720064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 
00:35:33.178 [2024-11-19 03:16:43.720356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.720421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.720671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.720756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.721016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.721080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.721323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.721388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.721703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.721770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.722066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.722131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.722414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.722480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.722728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.722796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.722987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.723063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.723346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.723409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 
00:35:33.178 [2024-11-19 03:16:43.723663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.723745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.723998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.724066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.724311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.724377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.724641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.724721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.725020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.725084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.725327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.725393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.725592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.725657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.725984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.726051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.726296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.726361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.726555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.726620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 
00:35:33.178 [2024-11-19 03:16:43.726930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.726997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.178 [2024-11-19 03:16:43.727294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.178 [2024-11-19 03:16:43.727360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.178 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.727618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.727684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.727974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.728040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.728242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.728311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.728557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.728624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.728856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.728927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.729222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.729290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.729553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.729621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.729875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.729942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 
00:35:33.457 [2024-11-19 03:16:43.730239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.730315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.730615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.730682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.730972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.731043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.731292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.731358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.731603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.731670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.732001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.732067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.732329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.732399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.732725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.732795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.733050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.733117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.733383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.733449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 
00:35:33.457 [2024-11-19 03:16:43.733645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.733732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.734027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.734091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.734336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.734401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.734669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.734753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.735059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.735125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.735376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.735441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.735648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.735735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.736048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.736113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.736359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.736424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.736626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.736725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 
00:35:33.457 [2024-11-19 03:16:43.737037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.737101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.737391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.737457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.737720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.737789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.738093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.738157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.738450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.738515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.738741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.738809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.739057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.739124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.739421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.739485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.457 [2024-11-19 03:16:43.739673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.457 [2024-11-19 03:16:43.739754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.457 qpair failed and we were unable to recover it. 00:35:33.458 [2024-11-19 03:16:43.740057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.458 [2024-11-19 03:16:43.740122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.458 qpair failed and we were unable to recover it. 
00:35:33.458 [2024-11-19 03:16:43.740333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.458 [2024-11-19 03:16:43.740397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.458 qpair failed and we were unable to recover it.
00:35:33.459 [2024-11-19 03:16:43.740685 .. 03:16:43.809095] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same connect() failure (errno = 111) and sock connection error for tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 repeat continuously, and every attempt ends with "qpair failed and we were unable to recover it." (repeated log records collapsed)
00:35:33.463 [2024-11-19 03:16:43.809305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.809382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.809640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.809721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.809946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.810012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.810264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.810328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.810571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.810637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.810902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.810970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.811262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.811326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.811633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.811716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.812012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.812078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.812305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.812370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 
00:35:33.463 [2024-11-19 03:16:43.812617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.812682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.812957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.813022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.813306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.813371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.813622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.813707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.814025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.814091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.814333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.814397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.814660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.814745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.814998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.815066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.815359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.815423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.815665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.815751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 
00:35:33.463 [2024-11-19 03:16:43.815953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.816019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.816280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.816346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.816640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.816737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.463 [2024-11-19 03:16:43.817000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.463 [2024-11-19 03:16:43.817066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.463 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.817356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.817420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.817726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.817793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.818044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.818108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.818403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.818468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.818679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.818763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.819052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.819117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 
00:35:33.464 [2024-11-19 03:16:43.819402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.819467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.819717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.819783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.820036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.820101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.820391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.820456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.820650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.820745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.821036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.821101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.821394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.821458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.821764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.821830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.822080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.822145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.822438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.822504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 
00:35:33.464 [2024-11-19 03:16:43.822797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.822873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.823159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.823224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.823520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.823584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.823811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.823879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.824170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.824234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.824442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.824506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.824795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.824861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.825116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.825185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.825435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.825501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.825811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.825878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 
00:35:33.464 [2024-11-19 03:16:43.826138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.826203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.826455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.826520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.826812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.826877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.827171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.827235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.827498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.827564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.827810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.827876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.828175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.828240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.828489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.828553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.828842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.828908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.829207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.829271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 
00:35:33.464 [2024-11-19 03:16:43.829464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.829529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.829728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.829797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.830086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.830151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.830386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.830451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.464 qpair failed and we were unable to recover it. 00:35:33.464 [2024-11-19 03:16:43.830719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.464 [2024-11-19 03:16:43.830785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.831085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.831150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.831436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.831501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.831739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.831807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.832022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.832087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.832276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.832340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 
00:35:33.465 [2024-11-19 03:16:43.832594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.832659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.832894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.832961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.833256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.833322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.833513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.833578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.833838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.833905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.834101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.834166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.834411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.834477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.834766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.834834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.835124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.835189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.835484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.835550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 
00:35:33.465 [2024-11-19 03:16:43.835775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.835853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.836070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.836137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.836395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.836460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.836718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.836784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.837031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.837097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.837349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.837413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.837712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.837779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.837974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.838039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.838310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.838374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.838611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.838675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 
00:35:33.465 [2024-11-19 03:16:43.838994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.839059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.839365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.839428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.839683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.839770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.840017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.840084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.840326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.840391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.840678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.840776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.841025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.841090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.841338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.841403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.841651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.841736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-19 03:16:43.842041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-19 03:16:43.842106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 
00:35:33.465 [2024-11-19 03:16:43.842352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.842418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.842670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.842753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.843044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.843108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.843357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.843422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.843731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.843797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.844082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.844148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.844402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.844467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.844746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.844813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.845057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.845122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.845369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.845436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-19 03:16:43.845733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.845800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.846061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.846127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.846417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.846482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.846704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.846771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.847062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.847128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.847378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.847442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.847757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.847825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.848115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.848180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.848467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.848530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.848825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.848891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-19 03:16:43.849146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.849222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.849467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.849532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.849834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.849900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.850194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.850260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.850516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.850580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.850821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.850887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.851127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.851192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.851391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.851457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.851686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.851768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-19 03:16:43.852017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-19 03:16:43.852084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-19 03:16:43.852388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.466 [2024-11-19 03:16:43.852453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.466 qpair failed and we were unable to recover it.
[... records from 03:16:43.852759 through 03:16:43.922397 repeat the same three-line sequence for every reconnect attempt: posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. ...]
00:35:33.472 [2024-11-19 03:16:43.922707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.472 [2024-11-19 03:16:43.922773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.472 qpair failed and we were unable to recover it.
00:35:33.472 [2024-11-19 03:16:43.923015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.923080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.923381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.923446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.923707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.923774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.924016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.924081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.924365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.924429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.924723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.924791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.925057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.925124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.925387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.925462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.925738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.925806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.926067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.926132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-19 03:16:43.926423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.926487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.926741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.926806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.927055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.927122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.927376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.927441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.927733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.927798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.928101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.928165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.928458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.928529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.928822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.928888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.929134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.929199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.929491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.929556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-19 03:16:43.929790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.929857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.930162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.930228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.930421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.930486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.930678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.930758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.931047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.931111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.931354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.931420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.931676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.931756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.932009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.932074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.932321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.932387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.932672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.932755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-19 03:16:43.933058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.933123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.933371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.933436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.933725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.933791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.934079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.934145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.934464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.934531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.934776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.934843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.935115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.935180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.935475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-19 03:16:43.935541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-19 03:16:43.935736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.935803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.936054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.936119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-19 03:16:43.936362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.936397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.936564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.936620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.936910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.936978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.937241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.937277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.937459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.937524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.937815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.937852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.938007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.938042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.938157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.938199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.938318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.938354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.938485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.938521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-19 03:16:43.938671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.938718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.938850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.938884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.939028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.939061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.939170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.939205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.939358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.939393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.939534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.939568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.939709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.939761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.939879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.939916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.940086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.940121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.940232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.940266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-19 03:16:43.940367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.940400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.940555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.940589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.940764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.940800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.940933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.940967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.941119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.941154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.941290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.941324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.941466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.941516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.941668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.941717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.941882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.941916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.942063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.942100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-19 03:16:43.942234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.942285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.942439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.942477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.942595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.942632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.942806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.942842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.942955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.943014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.943110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.943145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.943353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.943419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.943666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.943765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-19 03:16:43.943875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-19 03:16:43.943910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.944030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.944065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-19 03:16:43.944206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.944240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.944500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.944535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.944788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.944823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.944970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.945005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.945115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.945148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.945317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.945381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.945672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.945747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.945887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.945921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.946101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.946155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.946416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.946479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-19 03:16:43.946731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.946782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.946914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.946967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.947137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.947189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.947343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.947379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.947650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.947719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.947878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.947913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.948015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.948049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.948167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.948201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.948340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.948374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.948567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.948602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-19 03:16:43.948745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.948781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.948921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.948963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.949121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.949175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.949606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.949675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.949873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.949909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.950089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.950155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.950417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.950453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.950755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.950790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.950897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.950931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.951151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.951219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-19 03:16:43.951473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.951541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.951753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.951789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.951906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.951942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.952201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.952267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.952536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-19 03:16:43.952603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-19 03:16:43.952821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.952858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.952976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.953029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.953259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.953324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.953612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.953677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.953873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.953908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-19 03:16:43.954101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.954166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.954424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.954489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.954681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.954755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.954891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.954943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.955213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.955272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.955384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.955422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.955599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.955636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.955774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.955810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.955944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.955978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.956221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.956280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-19 03:16:43.956468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.956528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.956756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.956791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.956943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.957006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.957267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.957332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.957546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.957613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.957848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.957883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.958001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.958036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.958152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.958186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.958396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.958430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-19 03:16:43.958626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-19 03:16:43.958730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 03:16:43.958 through 03:16:44.008 for tqpair values 0x7f1b70000b90, 0x7f1b74000b90, 0x7f1b7c000b90, and 0x1942b40; duplicate records omitted ...]
00:35:33.480 [2024-11-19 03:16:44.008727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.008782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.008933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.008969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.009123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.009158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.009266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.009302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.009437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.009472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.009610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.009681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.009878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.009914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.010121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.010185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.010398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.010465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-19 03:16:44.010756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-19 03:16:44.010822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 
00:35:33.480 [2024-11-19 03:16:44.011123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.480 [2024-11-19 03:16:44.011186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.480 qpair failed and we were unable to recover it.
00:35:33.480 [2024-11-19 03:16:44.011444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.480 [2024-11-19 03:16:44.011509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.480 qpair failed and we were unable to recover it.
00:35:33.480 [2024-11-19 03:16:44.011795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.480 [2024-11-19 03:16:44.011861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.480 qpair failed and we were unable to recover it.
00:35:33.480 [2024-11-19 03:16:44.012163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.012226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.012474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.012539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.012783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.012861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.013107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.013173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.013419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.013486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.013789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.013855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 403963 Killed "${NVMF_APP[@]}" "$@"
00:35:33.481 [2024-11-19 03:16:44.014118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.014182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
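The `target_disconnect.sh: line 36: 403963 Killed "${NVMF_APP[@]}" "$@"` record above is the point where the test deliberately kills the running nvmf target application, so nothing is listening on 10.0.0.2:4420 and every reconnect attempt from the initiator fails with errno = 111, which is ECONNREFUSED on Linux. A minimal C sketch of that failure mode, reusing the address and port from the log; this is an illustration only, not the SPDK posix_sock_create() code path:

```c
/* econnrefused_demo.c: connect() to a reachable host with no listener on
 * the port fails with errno 111 (ECONNREFUSED), the same errno reported by
 * posix_sock_create in the log above.  Illustration only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target process killed but the host still up, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```

Run against a reachable host with no listener on the port, it prints the same "connect() failed, errno = 111" seen throughout this trace.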
00:35:33.481 [2024-11-19 03:16:44.014435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.014500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.014749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.014816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.015018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.015083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:33.481 [2024-11-19 03:16:44.015333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.015397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:33.481 [2024-11-19 03:16:44.015641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.015717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:33.481 [2024-11-19 03:16:44.015924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.015988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:33.481 [2024-11-19 03:16:44.016280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.016344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.016606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.481 [2024-11-19 03:16:44.016670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.481 qpair failed and we were unable to recover it.
00:35:33.481 [2024-11-19 03:16:44.016897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.016961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.017164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.017228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.017482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.017546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.017815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.017881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.018091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.018155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.018365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.018399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.018597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.018661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.018882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.018947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.019241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.019304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.019602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.019666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 
00:35:33.481 [2024-11-19 03:16:44.019971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.020036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.020258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.020323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.020592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.020656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.020883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.020918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.021015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.021049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.021191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.021226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.021407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.021472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.021748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.021782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.021901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.021934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.022193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.022227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 
00:35:33.481 [2024-11-19 03:16:44.022441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.022505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.022756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.022791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 [2024-11-19 03:16:44.022932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.481 [2024-11-19 03:16:44.022967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.481 qpair failed and we were unable to recover it. 00:35:33.481 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=404519 00:35:33.481 [2024-11-19 03:16:44.023228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:33.482 [2024-11-19 03:16:44.023292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 404519 00:35:33.482 [2024-11-19 03:16:44.023533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.023597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.023801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 404519 ']' 00:35:33.482 [2024-11-19 03:16:44.023836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.023948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.023983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.482 qpair failed and we were unable to recover it. 
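Buried in the interleaved output above, nvmf/common.sh restarts the target inside the cvl_0_0_ns_spdk network namespace with `nvmf_tgt -i 0 -e 0xFFFF -m 0xF0` and records its PID as nvmfpid=404519. The `-m` argument is SPDK's hexadecimal CPU core mask, so 0xF0 selects cores 4 through 7 for the target. A small sketch of how such a mask decodes; an assumed illustration, not SPDK's actual option parser:

```c
/* coremask_demo.c: decode a hex core mask like the -m 0xF0 passed to
 * nvmf_tgt above.  0xF0 == 0b11110000, i.e. cores 4, 5, 6 and 7. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xF0", NULL, 16);

    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 8 * (int)sizeof(mask); core++) {
        if (mask & (1UL << core))
            printf(" %d", core);
    }
    printf("\n");   /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}
```

The same decoding applies to the `-m 0xF0` passed to nvmfappstart earlier in the trace.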
00:35:33.482 [2024-11-19 03:16:44.024129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.482 [2024-11-19 03:16:44.024163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.024273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.024307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.024417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.024452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.024588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:33.482 [2024-11-19 03:16:44.024621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.024730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.024765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.024911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.024946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.025092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.025125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.025244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.025279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 
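The `waitforlisten 404519` call traced above, together with `rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`, and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message, shows the test polling until the relaunched target accepts connections on its RPC socket before proceeding. A hypothetical C sketch of that wait-until-listening pattern; the real helper is a shell function in autotest_common.sh, so this is only an illustration of the idea:

```c
/* wait_for_rpc_sock.c: poll a Unix domain socket path until a server
 * accepts a connection, or give up after max_retries attempts.
 * Hypothetical stand-in for the shell helper traced as "waitforlisten". */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* server is up and listening */
        }
        close(fd);
        sleep(1);                   /* not listening yet; retry */
    }
    return -1;                      /* gave up after max_retries attempts */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("RPC socket is accepting connections\n");
    else
        printf("timed out waiting for RPC socket\n");
    return 0;
}
```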
00:35:33.482 [2024-11-19 03:16:44.025421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.025455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.025562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.025596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.025750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.025786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.025906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.025942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.026048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.026082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.026224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.026288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.026477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.026538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.026751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.026785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.026892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.026927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.027067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.027102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 
00:35:33.482 [2024-11-19 03:16:44.027211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.027245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.027359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.027393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.027532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.027566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.027685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.027727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.027840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.027875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.027986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.028020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.028137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.028171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.028283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.028318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.482 [2024-11-19 03:16:44.028423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.482 [2024-11-19 03:16:44.028458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.482 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.028573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.028609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 
00:35:33.483 [2024-11-19 03:16:44.028714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.028750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.028869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.028903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.029006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.029041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.029172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.029206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.029313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.029348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.029461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.029495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.029663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.029726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.029871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.029908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.030012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.030047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.030170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.030203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 
00:35:33.483 [2024-11-19 03:16:44.030326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.030360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.030500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.030534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.030647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.030681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.030831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.030864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.030973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.031006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.031137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.031169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.031299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.031332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.031471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.031503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.031629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.031661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.031873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.031924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 
00:35:33.483 [2024-11-19 03:16:44.032078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.032114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.032251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.032286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.032389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.032422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.032525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.032558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.032677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.032718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.032846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.032879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.032981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.033014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.033175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.033206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.033335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.033366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.033501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.033532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 
00:35:33.483 [2024-11-19 03:16:44.033659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.033697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.033809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.033842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.033950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.033981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.034090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.034122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.034254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.034288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.034400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.034432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.034543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.034577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.034729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.034763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.034871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.034903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-19 03:16:44.035047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-19 03:16:44.035096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-19 03:16:44.035197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.035231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.035348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.035379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.035542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.035573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.035711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.035744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.035854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.035886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.035999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.036031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.036130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.036170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.036297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.036329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.036427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.036458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.036619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.036667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-19 03:16:44.036807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.036841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.036941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.036973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.037073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.037112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.037214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.037245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.037337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.037370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.037505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.037536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.037641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.037672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.037786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.037817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.037919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.037949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.038075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.038105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-19 03:16:44.038206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.038237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.038367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.038397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.038550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.038581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.038685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.038727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.038830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.038861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.038965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.038996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.039085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.039117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.039279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.039312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.039438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.039469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.039602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.039632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-19 03:16:44.039728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.039760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.039882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.039913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.040014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.040045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.040201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.040237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.040401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.040431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.040572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.040605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.040738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.040772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.040881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.040911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.041034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.041065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-19 03:16:44.041172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-19 03:16:44.041203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-19 03:16:44.041336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.041367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.041520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.041552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.041704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.041750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.041869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.041902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.042061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.042093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.042186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.042217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.042343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.042374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.042515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.042547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.042648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.042681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.042786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.042818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-19 03:16:44.042947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.042978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.043072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.043104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.043211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.043242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.043385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.043418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.043555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.043601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.043716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.043749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.043843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.043876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.043984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.044015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.044117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.044157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.044263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.044294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-19 03:16:44.044406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.044438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.044538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.044568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.044704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.044737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.044847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.044878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.044971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.045012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.045109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.045141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.045264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.045296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.045397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.045428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.045523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.045564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.045661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.045700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-19 03:16:44.045816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.045846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.046052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.046207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.046352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.046512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.046624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.046742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.046882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.046981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.047011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.047145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.047186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-19 03:16:44.047298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-19 03:16:44.047327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-19 03:16:44.047440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.047477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.047596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.047623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.047767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.047806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.047895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.047921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.048016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.048130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.048252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.048381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.048508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 
00:35:33.486 [2024-11-19 03:16:44.048628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.048791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.048907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.048935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.049083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.049110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.049241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.049270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.049352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.049380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.049470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.049497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.049622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.049651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.049778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.049806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.049896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.049923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 
00:35:33.486 [2024-11-19 03:16:44.050012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.050131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.050246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.050357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.050475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.050634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.050783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.050898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.050925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 
00:35:33.486 [2024-11-19 03:16:44.051294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.051961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.051989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.052099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.052126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.052210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.052241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.052390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.052418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 
00:35:33.486 [2024-11-19 03:16:44.052540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.052568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.052660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.486 [2024-11-19 03:16:44.052699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.486 qpair failed and we were unable to recover it. 00:35:33.486 [2024-11-19 03:16:44.052824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.052851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.052949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.052987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.053106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.053133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.053214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.053240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.053381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.053409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.053501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.053529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.053625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.053666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.053779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.053807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 
00:35:33.487 [2024-11-19 03:16:44.053888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.053915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.054905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.054932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.055048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.055076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 
00:35:33.487 [2024-11-19 03:16:44.055183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.055215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.055314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.055355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.055455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.487 [2024-11-19 03:16:44.055484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.487 qpair failed and we were unable to recover it. 00:35:33.487 [2024-11-19 03:16:44.055568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.777 [2024-11-19 03:16:44.055595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.777 qpair failed and we were unable to recover it. 00:35:33.777 [2024-11-19 03:16:44.055696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.777 [2024-11-19 03:16:44.055723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.777 qpair failed and we were unable to recover it. 00:35:33.777 [2024-11-19 03:16:44.055810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.777 [2024-11-19 03:16:44.055837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.777 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.055947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.055976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.056065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.056094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.056187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.056215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.056301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.056328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 
00:35:33.778 [2024-11-19 03:16:44.056444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.056471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.056589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.056619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.056712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.056741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.056834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.056862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.778 qpair failed and we were unable to recover it. 00:35:33.778 [2024-11-19 03:16:44.056999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.778 [2024-11-19 03:16:44.057027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.779 qpair failed and we were unable to recover it. 00:35:33.779 [2024-11-19 03:16:44.057142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.779 [2024-11-19 03:16:44.057170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.779 qpair failed and we were unable to recover it. 00:35:33.779 [2024-11-19 03:16:44.057284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.779 [2024-11-19 03:16:44.057312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.779 qpair failed and we were unable to recover it. 00:35:33.779 [2024-11-19 03:16:44.057400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.779 [2024-11-19 03:16:44.057428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.779 qpair failed and we were unable to recover it. 00:35:33.779 [2024-11-19 03:16:44.057563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.779 [2024-11-19 03:16:44.057604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.779 qpair failed and we were unable to recover it. 00:35:33.779 [2024-11-19 03:16:44.057707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.779 [2024-11-19 03:16:44.057738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.779 qpair failed and we were unable to recover it. 
00:35:33.780 [2024-11-19 03:16:44.057831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.780 [2024-11-19 03:16:44.057860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.780 qpair failed and we were unable to recover it. 00:35:33.780 [2024-11-19 03:16:44.057945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.780 [2024-11-19 03:16:44.057972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.780 qpair failed and we were unable to recover it. 00:35:33.780 [2024-11-19 03:16:44.058093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.780 [2024-11-19 03:16:44.058120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.780 qpair failed and we were unable to recover it. 00:35:33.780 [2024-11-19 03:16:44.058204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.780 [2024-11-19 03:16:44.058242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.780 qpair failed and we were unable to recover it. 00:35:33.780 [2024-11-19 03:16:44.058338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.780 [2024-11-19 03:16:44.058365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.780 qpair failed and we were unable to recover it. 00:35:33.780 [2024-11-19 03:16:44.058456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.780 [2024-11-19 03:16:44.058497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.780 qpair failed and we were unable to recover it. 00:35:33.780 [2024-11-19 03:16:44.058604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.780 [2024-11-19 03:16:44.058632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.781 qpair failed and we were unable to recover it. 00:35:33.781 [2024-11-19 03:16:44.058751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.781 [2024-11-19 03:16:44.058780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.781 qpair failed and we were unable to recover it. 00:35:33.781 [2024-11-19 03:16:44.058871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.781 [2024-11-19 03:16:44.058898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.781 qpair failed and we were unable to recover it. 00:35:33.781 [2024-11-19 03:16:44.058978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.781 [2024-11-19 03:16:44.059005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.781 qpair failed and we were unable to recover it. 
00:35:33.781 [2024-11-19 03:16:44.059084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.781 [2024-11-19 03:16:44.059124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.781 qpair failed and we were unable to recover it. 00:35:33.782 [2024-11-19 03:16:44.059239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.782 [2024-11-19 03:16:44.059267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.782 qpair failed and we were unable to recover it. 00:35:33.782 [2024-11-19 03:16:44.059372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.782 [2024-11-19 03:16:44.059402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.782 qpair failed and we were unable to recover it. 00:35:33.782 [2024-11-19 03:16:44.059520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.782 [2024-11-19 03:16:44.059547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.782 qpair failed and we were unable to recover it. 00:35:33.782 [2024-11-19 03:16:44.059661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.783 [2024-11-19 03:16:44.059705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.783 qpair failed and we were unable to recover it. 00:35:33.783 [2024-11-19 03:16:44.059789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.783 [2024-11-19 03:16:44.059816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.783 qpair failed and we were unable to recover it. 00:35:33.783 [2024-11-19 03:16:44.059895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.783 [2024-11-19 03:16:44.059929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.783 qpair failed and we were unable to recover it. 00:35:33.783 [2024-11-19 03:16:44.060027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.783 [2024-11-19 03:16:44.060054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.783 qpair failed and we were unable to recover it. 00:35:33.783 [2024-11-19 03:16:44.060137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.783 [2024-11-19 03:16:44.060164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.783 qpair failed and we were unable to recover it. 00:35:33.783 [2024-11-19 03:16:44.060246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.783 [2024-11-19 03:16:44.060274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.783 qpair failed and we were unable to recover it. 
00:35:33.783 [2024-11-19 03:16:44.060356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.783 [2024-11-19 03:16:44.060383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.783 qpair failed and we were unable to recover it. 00:35:33.784 [2024-11-19 03:16:44.060501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.784 [2024-11-19 03:16:44.060528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.784 qpair failed and we were unable to recover it. 00:35:33.784 [2024-11-19 03:16:44.060643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.784 [2024-11-19 03:16:44.060671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.784 qpair failed and we were unable to recover it. 00:35:33.784 [2024-11-19 03:16:44.060774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.784 [2024-11-19 03:16:44.060814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.784 qpair failed and we were unable to recover it. 00:35:33.784 [2024-11-19 03:16:44.060936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.784 [2024-11-19 03:16:44.060965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.784 qpair failed and we were unable to recover it. 00:35:33.784 [2024-11-19 03:16:44.061094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.785 [2024-11-19 03:16:44.061122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.785 qpair failed and we were unable to recover it. 00:35:33.785 [2024-11-19 03:16:44.061231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.785 [2024-11-19 03:16:44.061258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.785 qpair failed and we were unable to recover it. 00:35:33.785 [2024-11-19 03:16:44.061346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.785 [2024-11-19 03:16:44.061383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.785 qpair failed and we were unable to recover it. 00:35:33.785 [2024-11-19 03:16:44.061478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.785 [2024-11-19 03:16:44.061505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.785 qpair failed and we were unable to recover it. 00:35:33.785 [2024-11-19 03:16:44.061603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.785 [2024-11-19 03:16:44.061643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.786 qpair failed and we were unable to recover it. 
00:35:33.786 [2024-11-19 03:16:44.061743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.786 [2024-11-19 03:16:44.061772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.786 qpair failed and we were unable to recover it. 00:35:33.786 [2024-11-19 03:16:44.061885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.786 [2024-11-19 03:16:44.061912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.786 qpair failed and we were unable to recover it. 00:35:33.786 [2024-11-19 03:16:44.062030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.786 [2024-11-19 03:16:44.062056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.786 qpair failed and we were unable to recover it. 00:35:33.786 [2024-11-19 03:16:44.062138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.786 [2024-11-19 03:16:44.062166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.786 qpair failed and we were unable to recover it. 00:35:33.787 [2024-11-19 03:16:44.062250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.787 [2024-11-19 03:16:44.062279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.787 qpair failed and we were unable to recover it. 00:35:33.787 [2024-11-19 03:16:44.062364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.787 [2024-11-19 03:16:44.062393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.787 qpair failed and we were unable to recover it. 00:35:33.787 [2024-11-19 03:16:44.062475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.787 [2024-11-19 03:16:44.062502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.787 qpair failed and we were unable to recover it. 00:35:33.787 [2024-11-19 03:16:44.062612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.787 [2024-11-19 03:16:44.062639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.787 qpair failed and we were unable to recover it. 00:35:33.787 [2024-11-19 03:16:44.062775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.787 [2024-11-19 03:16:44.062803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.787 qpair failed and we were unable to recover it. 00:35:33.787 [2024-11-19 03:16:44.062893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.787 [2024-11-19 03:16:44.062921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.787 qpair failed and we were unable to recover it. 
00:35:33.788 [2024-11-19 03:16:44.063009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.788 [2024-11-19 03:16:44.063036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.788 qpair failed and we were unable to recover it. 00:35:33.788 [2024-11-19 03:16:44.063134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.788 [2024-11-19 03:16:44.063162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.788 qpair failed and we were unable to recover it. 00:35:33.788 [2024-11-19 03:16:44.063255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.788 [2024-11-19 03:16:44.063283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.788 qpair failed and we were unable to recover it. 00:35:33.788 [2024-11-19 03:16:44.063390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.788 [2024-11-19 03:16:44.063417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.788 qpair failed and we were unable to recover it. 00:35:33.788 [2024-11-19 03:16:44.063563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.788 [2024-11-19 03:16:44.063591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.788 qpair failed and we were unable to recover it. 00:35:33.788 [2024-11-19 03:16:44.063700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.789 [2024-11-19 03:16:44.063728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.789 qpair failed and we were unable to recover it. 00:35:33.789 [2024-11-19 03:16:44.063817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.789 [2024-11-19 03:16:44.063844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.789 qpair failed and we were unable to recover it. 00:35:33.789 [2024-11-19 03:16:44.063941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.789 [2024-11-19 03:16:44.063972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.789 qpair failed and we were unable to recover it. 00:35:33.789 [2024-11-19 03:16:44.064061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.789 [2024-11-19 03:16:44.064089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.789 qpair failed and we were unable to recover it. 00:35:33.789 [2024-11-19 03:16:44.064214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.789 [2024-11-19 03:16:44.064243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.789 qpair failed and we were unable to recover it. 
00:35:33.789 [2024-11-19 03:16:44.064351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.790 [2024-11-19 03:16:44.064379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.790 qpair failed and we were unable to recover it. 00:35:33.790 [2024-11-19 03:16:44.064499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.790 [2024-11-19 03:16:44.064525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.790 qpair failed and we were unable to recover it. 00:35:33.790 [2024-11-19 03:16:44.064635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.790 [2024-11-19 03:16:44.064661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.790 qpair failed and we were unable to recover it. 00:35:33.790 [2024-11-19 03:16:44.064756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.790 [2024-11-19 03:16:44.064783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.790 qpair failed and we were unable to recover it. 00:35:33.790 [2024-11-19 03:16:44.064870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.790 [2024-11-19 03:16:44.064896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.790 qpair failed and we were unable to recover it. 00:35:33.790 [2024-11-19 03:16:44.064993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.790 [2024-11-19 03:16:44.065021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.790 qpair failed and we were unable to recover it. 00:35:33.790 [2024-11-19 03:16:44.065121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.791 [2024-11-19 03:16:44.065147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.791 qpair failed and we were unable to recover it. 00:35:33.791 [2024-11-19 03:16:44.065243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.791 [2024-11-19 03:16:44.065272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.791 qpair failed and we were unable to recover it. 00:35:33.791 [2024-11-19 03:16:44.065396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.791 [2024-11-19 03:16:44.065423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.791 qpair failed and we were unable to recover it. 00:35:33.791 [2024-11-19 03:16:44.065513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.791 [2024-11-19 03:16:44.065539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.791 qpair failed and we were unable to recover it. 
00:35:33.791 [2024-11-19 03:16:44.065635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.792 [2024-11-19 03:16:44.065662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.792 qpair failed and we were unable to recover it. 00:35:33.792 [2024-11-19 03:16:44.065761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.792 [2024-11-19 03:16:44.065790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.792 qpair failed and we were unable to recover it. 00:35:33.792 [2024-11-19 03:16:44.065908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.792 [2024-11-19 03:16:44.065934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.792 qpair failed and we were unable to recover it. 00:35:33.792 [2024-11-19 03:16:44.066017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.793 [2024-11-19 03:16:44.066043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.793 qpair failed and we were unable to recover it. 00:35:33.793 [2024-11-19 03:16:44.066121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.793 [2024-11-19 03:16:44.066158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.793 qpair failed and we were unable to recover it. 00:35:33.793 [2024-11-19 03:16:44.066286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.793 [2024-11-19 03:16:44.066326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.793 qpair failed and we were unable to recover it. 00:35:33.793 [2024-11-19 03:16:44.066450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.793 [2024-11-19 03:16:44.066484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.793 qpair failed and we were unable to recover it. 00:35:33.793 [2024-11-19 03:16:44.066604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.793 [2024-11-19 03:16:44.066632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.793 qpair failed and we were unable to recover it. 00:35:33.794 [2024-11-19 03:16:44.066731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.066759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 00:35:33.794 [2024-11-19 03:16:44.066868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.066895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 
00:35:33.794 [2024-11-19 03:16:44.066980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.067008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 00:35:33.794 [2024-11-19 03:16:44.067099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.067126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 00:35:33.794 [2024-11-19 03:16:44.067217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.067250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 00:35:33.794 [2024-11-19 03:16:44.067346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.067372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 00:35:33.794 [2024-11-19 03:16:44.067468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.067496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 00:35:33.794 [2024-11-19 03:16:44.067615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.794 [2024-11-19 03:16:44.067641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.794 qpair failed and we were unable to recover it. 00:35:33.795 [2024-11-19 03:16:44.067724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.795 [2024-11-19 03:16:44.067751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.795 qpair failed and we were unable to recover it. 00:35:33.795 [2024-11-19 03:16:44.067847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.795 [2024-11-19 03:16:44.067874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.795 qpair failed and we were unable to recover it. 00:35:33.795 [2024-11-19 03:16:44.067956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.795 [2024-11-19 03:16:44.067982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.795 qpair failed and we were unable to recover it. 00:35:33.795 [2024-11-19 03:16:44.068129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.795 [2024-11-19 03:16:44.068163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.795 qpair failed and we were unable to recover it. 
00:35:33.795 [2024-11-19 03:16:44.068244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.795 [2024-11-19 03:16:44.068271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.795 qpair failed and we were unable to recover it. 00:35:33.795 [2024-11-19 03:16:44.068388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.795 [2024-11-19 03:16:44.068428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.795 qpair failed and we were unable to recover it. 00:35:33.795 [2024-11-19 03:16:44.068535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.795 [2024-11-19 03:16:44.068576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.795 qpair failed and we were unable to recover it. 00:35:33.795 [2024-11-19 03:16:44.068722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.796 [2024-11-19 03:16:44.068751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.796 qpair failed and we were unable to recover it. 00:35:33.796 [2024-11-19 03:16:44.068893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.796 [2024-11-19 03:16:44.068920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.796 qpair failed and we were unable to recover it. 00:35:33.796 [2024-11-19 03:16:44.069007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.796 [2024-11-19 03:16:44.069034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.796 qpair failed and we were unable to recover it. 00:35:33.796 [2024-11-19 03:16:44.069153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.796 [2024-11-19 03:16:44.069180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.796 qpair failed and we were unable to recover it. 00:35:33.796 [2024-11-19 03:16:44.069257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.796 [2024-11-19 03:16:44.069294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.796 qpair failed and we were unable to recover it. 00:35:33.796 [2024-11-19 03:16:44.069422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.796 [2024-11-19 03:16:44.069452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.796 qpair failed and we were unable to recover it. 00:35:33.796 [2024-11-19 03:16:44.069569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.797 [2024-11-19 03:16:44.069610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.797 qpair failed and we were unable to recover it. 
00:35:33.797 [2024-11-19 03:16:44.069708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.797 [2024-11-19 03:16:44.069736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.797 qpair failed and we were unable to recover it. 00:35:33.797 [2024-11-19 03:16:44.069876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.797 [2024-11-19 03:16:44.069903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.797 qpair failed and we were unable to recover it. 00:35:33.797 [2024-11-19 03:16:44.070001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.797 [2024-11-19 03:16:44.070027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.797 qpair failed and we were unable to recover it. 00:35:33.797 [2024-11-19 03:16:44.070146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.797 [2024-11-19 03:16:44.070173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.797 qpair failed and we were unable to recover it. 00:35:33.797 [2024-11-19 03:16:44.070277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.797 [2024-11-19 03:16:44.070305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.797 qpair failed and we were unable to recover it. 00:35:33.797 [2024-11-19 03:16:44.070427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.070455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 00:35:33.798 [2024-11-19 03:16:44.070544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.070574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 00:35:33.798 [2024-11-19 03:16:44.070667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.070701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 00:35:33.798 [2024-11-19 03:16:44.070792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.070819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 00:35:33.798 [2024-11-19 03:16:44.070916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.070944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 
00:35:33.798 [2024-11-19 03:16:44.071041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.071069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 00:35:33.798 [2024-11-19 03:16:44.071187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.071218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 00:35:33.798 [2024-11-19 03:16:44.071305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.798 [2024-11-19 03:16:44.071332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.798 qpair failed and we were unable to recover it. 00:35:33.798 [2024-11-19 03:16:44.071416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.799 [2024-11-19 03:16:44.071444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.799 qpair failed and we were unable to recover it. 00:35:33.799 [2024-11-19 03:16:44.071552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.799 [2024-11-19 03:16:44.071579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.799 qpair failed and we were unable to recover it. 00:35:33.799 [2024-11-19 03:16:44.071664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.799 [2024-11-19 03:16:44.071718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.799 qpair failed and we were unable to recover it. 00:35:33.799 [2024-11-19 03:16:44.071839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-19 03:16:44.071866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-19 03:16:44.071944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-19 03:16:44.071970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-19 03:16:44.072089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-19 03:16:44.072126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-19 03:16:44.072218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-19 03:16:44.072244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 
00:35:33.800 [2024-11-19 03:16:44.072339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-19 03:16:44.072366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-19 03:16:44.072473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-19 03:16:44.072500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-19 03:16:44.072576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-19 03:16:44.072602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-19 03:16:44.072685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-19 03:16:44.072720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-19 03:16:44.072818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-19 03:16:44.072850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-19 03:16:44.073002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-19 03:16:44.073031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-19 03:16:44.073149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-19 03:16:44.073177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-19 03:16:44.073289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-19 03:16:44.073318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-19 03:16:44.073426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-19 03:16:44.073465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-19 03:16:44.073545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-19 03:16:44.073573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 
00:35:33.802 [2024-11-19 03:16:44.073681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-19 03:16:44.073717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-19 03:16:44.073809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-19 03:16:44.073836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-19 03:16:44.073949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.803 [2024-11-19 03:16:44.073977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.803 qpair failed and we were unable to recover it. 00:35:33.803 [2024-11-19 03:16:44.074007] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:33.803 [2024-11-19 03:16:44.074097] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.803 [2024-11-19 03:16:44.074098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.803 [2024-11-19 03:16:44.074126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.803 qpair failed and we were unable to recover it. 00:35:33.803 [2024-11-19 03:16:44.074242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.803 [2024-11-19 03:16:44.074269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-19 03:16:44.074400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-19 03:16:44.074425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-19 03:16:44.074510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-19 03:16:44.074539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-19 03:16:44.074649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-19 03:16:44.074674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.806 qpair failed and we were unable to recover it. 00:35:33.806 [2024-11-19 03:16:44.074812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.806 [2024-11-19 03:16:44.074852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.806 qpair failed and we were unable to recover it. 
00:35:33.806 [2024-11-19 03:16:44.074941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.807 [2024-11-19 03:16:44.074981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-19 03:16:44.075103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-19 03:16:44.075131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-19 03:16:44.075230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-19 03:16:44.075258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-19 03:16:44.075408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-19 03:16:44.075435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-19 03:16:44.075520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-19 03:16:44.075547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.811 [2024-11-19 03:16:44.075631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.811 [2024-11-19 03:16:44.075658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.811 qpair failed and we were unable to recover it. 00:35:33.811 [2024-11-19 03:16:44.075757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.811 [2024-11-19 03:16:44.075784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.812 qpair failed and we were unable to recover it. 00:35:33.812 [2024-11-19 03:16:44.075878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.812 [2024-11-19 03:16:44.075904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.812 qpair failed and we were unable to recover it. 00:35:33.812 [2024-11-19 03:16:44.075996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.812 [2024-11-19 03:16:44.076024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.812 qpair failed and we were unable to recover it. 00:35:33.812 [2024-11-19 03:16:44.076116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.812 [2024-11-19 03:16:44.076143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.812 qpair failed and we were unable to recover it. 
00:35:33.813 [2024-11-19 03:16:44.076250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-19 03:16:44.076289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-19 03:16:44.076374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-19 03:16:44.076402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-19 03:16:44.076524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-19 03:16:44.076555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-19 03:16:44.076644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-19 03:16:44.076672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-19 03:16:44.076769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-19 03:16:44.076796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-19 03:16:44.076887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-19 03:16:44.076914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-19 03:16:44.076995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-19 03:16:44.077022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-19 03:16:44.077134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-19 03:16:44.077161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-19 03:16:44.077282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-19 03:16:44.077309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-19 03:16:44.077400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-19 03:16:44.077428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-19 03:16:44.077551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-19 03:16:44.077580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-19 03:16:44.077693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-19 03:16:44.077722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-19 03:16:44.077801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-19 03:16:44.077828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-19 03:16:44.077939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-19 03:16:44.077978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-19 03:16:44.078067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-19 03:16:44.078098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-19 03:16:44.078214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-19 03:16:44.078244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-19 03:16:44.078368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-19 03:16:44.078397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-19 03:16:44.078512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-19 03:16:44.078544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-19 03:16:44.078634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-19 03:16:44.078662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-19 03:16:44.078897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-19 03:16:44.078925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-19 03:16:44.079030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-19 03:16:44.079057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-19 03:16:44.079150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-19 03:16:44.079179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-19 03:16:44.079300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-19 03:16:44.079327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-19 03:16:44.079419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.079447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.079535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.079562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.079657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.079684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.079782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.079810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.079922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.079949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.080086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.080113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.080196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.080230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 
00:35:33.818 [2024-11-19 03:16:44.080373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.080402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.080502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.080552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.818 qpair failed and we were unable to recover it. 00:35:33.818 [2024-11-19 03:16:44.080675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.818 [2024-11-19 03:16:44.080716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.819 qpair failed and we were unable to recover it. 00:35:33.819 [2024-11-19 03:16:44.080801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.819 [2024-11-19 03:16:44.080829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.819 qpair failed and we were unable to recover it. 00:35:33.819 [2024-11-19 03:16:44.080940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.819 [2024-11-19 03:16:44.080966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.819 qpair failed and we were unable to recover it. 00:35:33.819 [2024-11-19 03:16:44.081057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.819 [2024-11-19 03:16:44.081085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.819 qpair failed and we were unable to recover it. 00:35:33.820 [2024-11-19 03:16:44.081181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-19 03:16:44.081208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 00:35:33.820 [2024-11-19 03:16:44.081299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-19 03:16:44.081332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 00:35:33.820 [2024-11-19 03:16:44.081428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-19 03:16:44.081456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 00:35:33.820 [2024-11-19 03:16:44.081546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-19 03:16:44.081573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 
00:35:33.820 [2024-11-19 03:16:44.081678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-19 03:16:44.081725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-19 03:16:44.081819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-19 03:16:44.081848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-19 03:16:44.081949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-19 03:16:44.081978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-19 03:16:44.082087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-19 03:16:44.082115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-19 03:16:44.082199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-19 03:16:44.082228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-19 03:16:44.082346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-19 03:16:44.082374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-19 03:16:44.082455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-19 03:16:44.082484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-19 03:16:44.082606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-19 03:16:44.082636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-19 03:16:44.082765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-19 03:16:44.082794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-19 03:16:44.082912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-19 03:16:44.082940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-19 03:16:44.083091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.823 [2024-11-19 03:16:44.083118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.823 qpair failed and we were unable to recover it. 00:35:33.823 [2024-11-19 03:16:44.083224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.823 [2024-11-19 03:16:44.083250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.823 qpair failed and we were unable to recover it. 00:35:33.823 [2024-11-19 03:16:44.083356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.823 [2024-11-19 03:16:44.083385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.823 qpair failed and we were unable to recover it. 00:35:33.823 [2024-11-19 03:16:44.083480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.823 [2024-11-19 03:16:44.083508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.823 qpair failed and we were unable to recover it. 00:35:33.823 [2024-11-19 03:16:44.083625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.823 [2024-11-19 03:16:44.083658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.823 qpair failed and we were unable to recover it. 00:35:33.823 [2024-11-19 03:16:44.083758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.823 [2024-11-19 03:16:44.083785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.083872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.083909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.084017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.084044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.084161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.084188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.084278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.084307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 
00:35:33.824 [2024-11-19 03:16:44.084425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.084458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.084550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.084579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.084679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.084722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.084836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.824 [2024-11-19 03:16:44.084875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.824 qpair failed and we were unable to recover it. 00:35:33.824 [2024-11-19 03:16:44.084970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.825 [2024-11-19 03:16:44.084999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.825 qpair failed and we were unable to recover it. 00:35:33.825 [2024-11-19 03:16:44.085096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.825 [2024-11-19 03:16:44.085122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.825 qpair failed and we were unable to recover it. 00:35:33.825 [2024-11-19 03:16:44.085208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.825 [2024-11-19 03:16:44.085245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.825 qpair failed and we were unable to recover it. 00:35:33.825 [2024-11-19 03:16:44.085376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.825 [2024-11-19 03:16:44.085403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.825 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.085528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.085556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.085646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.085673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-19 03:16:44.085801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.085829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.085974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.086001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.086085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.086113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.086264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.086290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.086442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.086470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.086561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.086591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-19 03:16:44.086712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-19 03:16:44.086754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-19 03:16:44.086848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-19 03:16:44.086875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-19 03:16:44.086977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-19 03:16:44.087004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-19 03:16:44.087116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-19 03:16:44.087143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-19 03:16:44.087230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-19 03:16:44.087263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-19 03:16:44.087382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-19 03:16:44.087414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-19 03:16:44.087538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-19 03:16:44.087564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-19 03:16:44.087669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-19 03:16:44.087720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-19 03:16:44.087809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-19 03:16:44.087838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-19 03:16:44.087957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-19 03:16:44.087989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-19 03:16:44.088080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-19 03:16:44.088108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-19 03:16:44.088196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-19 03:16:44.088223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-19 03:16:44.088313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-19 03:16:44.088340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-19 03:16:44.088430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-19 03:16:44.088457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 
00:35:33.829 [2024-11-19 03:16:44.088598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.829 [2024-11-19 03:16:44.088634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.829 qpair failed and we were unable to recover it. 00:35:33.829 [2024-11-19 03:16:44.088748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.829 [2024-11-19 03:16:44.088775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.829 qpair failed and we were unable to recover it. 00:35:33.829 [2024-11-19 03:16:44.088862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.829 [2024-11-19 03:16:44.088890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.830 qpair failed and we were unable to recover it. 00:35:33.830 [2024-11-19 03:16:44.088979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.830 [2024-11-19 03:16:44.089005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.830 qpair failed and we were unable to recover it. 00:35:33.830 [2024-11-19 03:16:44.089093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.830 [2024-11-19 03:16:44.089120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.830 qpair failed and we were unable to recover it. 00:35:33.830 [2024-11-19 03:16:44.089244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.830 [2024-11-19 03:16:44.089271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.830 qpair failed and we were unable to recover it. 00:35:33.830 [2024-11-19 03:16:44.089386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.830 [2024-11-19 03:16:44.089414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.830 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-19 03:16:44.089542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-19 03:16:44.089583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-19 03:16:44.089731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-19 03:16:44.089759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-19 03:16:44.089854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-19 03:16:44.089883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 
00:35:33.831 [2024-11-19 03:16:44.089972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-19 03:16:44.089999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-19 03:16:44.090095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-19 03:16:44.090121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-19 03:16:44.090197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-19 03:16:44.090224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-19 03:16:44.090312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-19 03:16:44.090338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-19 03:16:44.090468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-19 03:16:44.090509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-19 03:16:44.090599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-19 03:16:44.090639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-19 03:16:44.090751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-19 03:16:44.090780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-19 03:16:44.090879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-19 03:16:44.090906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-19 03:16:44.091009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-19 03:16:44.091038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-19 03:16:44.091154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-19 03:16:44.091191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 
00:35:33.833 [2024-11-19 03:16:44.091271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-19 03:16:44.091298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-19 03:16:44.091384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-19 03:16:44.091422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-19 03:16:44.091512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-19 03:16:44.091539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-19 03:16:44.091658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.834 [2024-11-19 03:16:44.091684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.834 qpair failed and we were unable to recover it. 00:35:33.834 [2024-11-19 03:16:44.091774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.834 [2024-11-19 03:16:44.091801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.834 qpair failed and we were unable to recover it. 00:35:33.834 [2024-11-19 03:16:44.091919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.834 [2024-11-19 03:16:44.091945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.834 qpair failed and we were unable to recover it. 00:35:33.834 [2024-11-19 03:16:44.092069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.834 [2024-11-19 03:16:44.092097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.834 qpair failed and we were unable to recover it. 00:35:33.834 [2024-11-19 03:16:44.092209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.834 [2024-11-19 03:16:44.092237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.834 qpair failed and we were unable to recover it. 00:35:33.834 [2024-11-19 03:16:44.092355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.834 [2024-11-19 03:16:44.092384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.834 qpair failed and we were unable to recover it. 00:35:33.834 [2024-11-19 03:16:44.092466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.835 [2024-11-19 03:16:44.092492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.835 qpair failed and we were unable to recover it. 
00:35:33.835 [2024-11-19 03:16:44.092588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.835 [2024-11-19 03:16:44.092616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.835 qpair failed and we were unable to recover it. 00:35:33.835 [2024-11-19 03:16:44.092709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.835 [2024-11-19 03:16:44.092745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.835 qpair failed and we were unable to recover it. 00:35:33.835 [2024-11-19 03:16:44.092835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.835 [2024-11-19 03:16:44.092862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.835 qpair failed and we were unable to recover it. 00:35:33.835 [2024-11-19 03:16:44.092950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.835 [2024-11-19 03:16:44.092987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.835 qpair failed and we were unable to recover it. 00:35:33.835 [2024-11-19 03:16:44.093110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.093136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-19 03:16:44.093215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.093244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-19 03:16:44.093328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.093356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-19 03:16:44.093467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.093494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-19 03:16:44.093608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.093635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-19 03:16:44.093731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.093760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 
00:35:33.836 [2024-11-19 03:16:44.093855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.093883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-19 03:16:44.094010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.094037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-19 03:16:44.094154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-19 03:16:44.094182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-19 03:16:44.094279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-19 03:16:44.094308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-19 03:16:44.094400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-19 03:16:44.094429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-19 03:16:44.094524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-19 03:16:44.094552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-19 03:16:44.094661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-19 03:16:44.094687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-19 03:16:44.094782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-19 03:16:44.094809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-19 03:16:44.094899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-19 03:16:44.094926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-19 03:16:44.095051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-19 03:16:44.095078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 
00:35:33.838 [2024-11-19 03:16:44.095153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-19 03:16:44.095178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-19 03:16:44.095293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-19 03:16:44.095322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-19 03:16:44.095413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-19 03:16:44.095452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-19 03:16:44.095600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-19 03:16:44.095628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-19 03:16:44.095753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-19 03:16:44.095781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.095867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.095895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.095978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.096005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.096120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.096147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.096292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.096319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.096438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.096465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 
00:35:33.839 [2024-11-19 03:16:44.096577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.096606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.096720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.096748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.096858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.096885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.096970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.097000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.097118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.839 [2024-11-19 03:16:44.097145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.839 qpair failed and we were unable to recover it. 00:35:33.839 [2024-11-19 03:16:44.097231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.097258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.097339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.097367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.097483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.097509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.097662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.097708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.097800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.097828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 
00:35:33.840 [2024-11-19 03:16:44.097922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.097950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.098070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.098101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.098191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.098217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.098363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.098391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.840 qpair failed and we were unable to recover it. 00:35:33.840 [2024-11-19 03:16:44.098537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.840 [2024-11-19 03:16:44.098566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.098653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.098698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.098814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.098841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.098931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.098958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.099049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.099077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.099185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.099212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 
00:35:33.841 [2024-11-19 03:16:44.099320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.099348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.099493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.099519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.099601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.099629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-19 03:16:44.099737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-19 03:16:44.099765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-19 03:16:44.099905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-19 03:16:44.099931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-19 03:16:44.100064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-19 03:16:44.100091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-19 03:16:44.100233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-19 03:16:44.100260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-19 03:16:44.100353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-19 03:16:44.100381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-19 03:16:44.100463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-19 03:16:44.100490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-19 03:16:44.100633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-19 03:16:44.100660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-19 03:16:44.100773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-19 03:16:44.100801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-19 03:16:44.100894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.843 [2024-11-19 03:16:44.100920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.843 qpair failed and we were unable to recover it. 00:35:33.843 [2024-11-19 03:16:44.101040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.101067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.844 qpair failed and we were unable to recover it. 00:35:33.844 [2024-11-19 03:16:44.101186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.101214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.844 qpair failed and we were unable to recover it. 00:35:33.844 [2024-11-19 03:16:44.101332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.101359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.844 qpair failed and we were unable to recover it. 00:35:33.844 [2024-11-19 03:16:44.101481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.101510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.844 qpair failed and we were unable to recover it. 00:35:33.844 [2024-11-19 03:16:44.101619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.101645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.844 qpair failed and we were unable to recover it. 00:35:33.844 [2024-11-19 03:16:44.101771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.101799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.844 qpair failed and we were unable to recover it. 00:35:33.844 [2024-11-19 03:16:44.101879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.101910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.844 qpair failed and we were unable to recover it. 00:35:33.844 [2024-11-19 03:16:44.102016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.844 [2024-11-19 03:16:44.102043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.845 qpair failed and we were unable to recover it. 
00:35:33.845 [2024-11-19 03:16:44.102138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.845 [2024-11-19 03:16:44.102165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.845 qpair failed and we were unable to recover it. 00:35:33.845 [2024-11-19 03:16:44.102285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.845 [2024-11-19 03:16:44.102313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.845 qpair failed and we were unable to recover it. 00:35:33.845 [2024-11-19 03:16:44.102437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.845 [2024-11-19 03:16:44.102466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.845 qpair failed and we were unable to recover it. 00:35:33.845 [2024-11-19 03:16:44.102580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.845 [2024-11-19 03:16:44.102607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.845 qpair failed and we were unable to recover it. 00:35:33.845 [2024-11-19 03:16:44.102716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.845 [2024-11-19 03:16:44.102744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.845 qpair failed and we were unable to recover it. 00:35:33.845 [2024-11-19 03:16:44.102861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.845 [2024-11-19 03:16:44.102888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.845 qpair failed and we were unable to recover it. 00:35:33.845 [2024-11-19 03:16:44.103015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.845 [2024-11-19 03:16:44.103055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.103147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.103176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.103258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.103286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.103401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.103427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 
00:35:33.846 [2024-11-19 03:16:44.103548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.103574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.103681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.103714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.103834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.103861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.103981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.104010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.104099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.104126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.104219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.104247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.846 [2024-11-19 03:16:44.104331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.846 [2024-11-19 03:16:44.104361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.846 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.104449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.104476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.104555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.104581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.104705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.104732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 
00:35:33.847 [2024-11-19 03:16:44.104814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.104841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.104952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.104990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.105068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.105094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.105204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.105231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.105339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.847 [2024-11-19 03:16:44.105365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.847 qpair failed and we were unable to recover it. 00:35:33.847 [2024-11-19 03:16:44.105483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.105512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.848 qpair failed and we were unable to recover it. 00:35:33.848 [2024-11-19 03:16:44.105599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.105626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.848 qpair failed and we were unable to recover it. 00:35:33.848 [2024-11-19 03:16:44.105793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.105833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.848 qpair failed and we were unable to recover it. 00:35:33.848 [2024-11-19 03:16:44.105957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.105990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.848 qpair failed and we were unable to recover it. 00:35:33.848 [2024-11-19 03:16:44.106073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.106100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.848 qpair failed and we were unable to recover it. 
00:35:33.848 [2024-11-19 03:16:44.106215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.106241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.848 qpair failed and we were unable to recover it. 00:35:33.848 [2024-11-19 03:16:44.106382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.106409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.848 qpair failed and we were unable to recover it. 00:35:33.848 [2024-11-19 03:16:44.106505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.848 [2024-11-19 03:16:44.106532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.849 qpair failed and we were unable to recover it. 00:35:33.849 [2024-11-19 03:16:44.106628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.849 [2024-11-19 03:16:44.106656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.849 qpair failed and we were unable to recover it. 00:35:33.849 [2024-11-19 03:16:44.106797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.849 [2024-11-19 03:16:44.106835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.849 qpair failed and we were unable to recover it. 00:35:33.849 [2024-11-19 03:16:44.106922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.849 [2024-11-19 03:16:44.106950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.849 qpair failed and we were unable to recover it. 00:35:33.849 [2024-11-19 03:16:44.107057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.849 [2024-11-19 03:16:44.107083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.849 qpair failed and we were unable to recover it. 00:35:33.849 [2024-11-19 03:16:44.107199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.849 [2024-11-19 03:16:44.107226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.849 qpair failed and we were unable to recover it. 00:35:33.849 [2024-11-19 03:16:44.107345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.107377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.107495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.107522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 
00:35:33.850 [2024-11-19 03:16:44.107610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.107638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.107773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.107801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.107894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.107921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.108070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.108097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.108215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.108250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.108375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.108402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.108521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.850 [2024-11-19 03:16:44.108547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.850 qpair failed and we were unable to recover it. 00:35:33.850 [2024-11-19 03:16:44.108667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.108713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.108794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.108821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.108965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.108999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 
00:35:33.851 [2024-11-19 03:16:44.109091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.109117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.109211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.109238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.109338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.109368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.109512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.109538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.109667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.109711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.109805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.109832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.109971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.109999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.110114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.110141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.110252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.110279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.110374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.110413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 
00:35:33.851 [2024-11-19 03:16:44.110506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.110535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.110676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.110716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.110798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.110825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.110936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.110963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.851 qpair failed and we were unable to recover it. 00:35:33.851 [2024-11-19 03:16:44.111061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.851 [2024-11-19 03:16:44.111087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.111163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.111191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.111302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.111329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.111444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.111471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.111585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.111612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.111729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.111756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 
00:35:33.852 [2024-11-19 03:16:44.111852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.111880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.112036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.112206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.112316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.112432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.112570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.112738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.112881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.112998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.113029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.113176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.113202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 
00:35:33.852 [2024-11-19 03:16:44.113318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.113347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.113465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.113493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.113610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.113637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.113764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.113791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.113879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.113906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.113999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.114119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.114260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.114410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.114543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 
00:35:33.852 [2024-11-19 03:16:44.114671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.114805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.114930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.114958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.115076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.115103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.115197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.115224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.115306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.115332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.115431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.115458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.115578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.115606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.115784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.115833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.115922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.115951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 
00:35:33.852 [2024-11-19 03:16:44.116074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.116102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.116184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.116211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.116295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.116322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.116401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.116428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.116546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.116573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.116701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.116742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.116889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.116918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.117011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.117037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.117132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.852 [2024-11-19 03:16:44.117160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.852 qpair failed and we were unable to recover it. 00:35:33.852 [2024-11-19 03:16:44.117251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.117278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 
00:35:33.853 [2024-11-19 03:16:44.117366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.117393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.117534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.117561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.117645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.117671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.117792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.117821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.117915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.117943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.118060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.118086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.118162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.118188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.118304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.118331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.118484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.118528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.118643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.118671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 
00:35:33.853 [2024-11-19 03:16:44.118765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.118794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.118884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.118911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.119055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.119168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.119313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.119424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.119593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.119752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.119875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.119996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 
00:35:33.853 [2024-11-19 03:16:44.120115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.120231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.120371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.120539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.120669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.120820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.120944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.120972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.121057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.121085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.121169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.121196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.121280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.121307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 
00:35:33.853 [2024-11-19 03:16:44.121392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.121419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.121561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.121601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.121737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.121777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.121909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.121937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.122088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.122114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.122198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.122229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.122373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.122399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.122473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.122499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.122597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.122626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.122762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.122792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 
00:35:33.853 [2024-11-19 03:16:44.122884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.122912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.853 [2024-11-19 03:16:44.123036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.853 [2024-11-19 03:16:44.123063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.853 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.123168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.123194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.123306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.123333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.123448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.123476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.123592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.123620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.123745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.123772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.123887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.123914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.124005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.124032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.124121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.124148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 
00:35:33.854 [2024-11-19 03:16:44.124278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.124304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.124378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.124405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.124522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.124548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.124666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.124699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.124855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.124894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.125023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.125170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.125271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.125415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.125558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 
00:35:33.854 [2024-11-19 03:16:44.125703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.125815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.125933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.125960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.126112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.126139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.126231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.126272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.126368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.126395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.126516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.126543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.126625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.126651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.126776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.126803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.126882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.126908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 
00:35:33.854 [2024-11-19 03:16:44.126997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.127025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.127137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.127163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.127245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.127273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.127387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.127413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.127498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.127525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.127610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.854 [2024-11-19 03:16:44.127642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.854 qpair failed and we were unable to recover it. 00:35:33.854 [2024-11-19 03:16:44.127770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.127797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.127922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.127953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.128098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.128126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.128217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.128245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 
00:35:33.855 [2024-11-19 03:16:44.128332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.128359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.128474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.128501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.128578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.128605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.128716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.128743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.128861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.128888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.128975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.129002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.129145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.129171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.129284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.129311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.129428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.129456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.129606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.129634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 
00:35:33.855 [2024-11-19 03:16:44.129751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.129792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.129923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.129951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.130919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.130945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 
00:35:33.855 [2024-11-19 03:16:44.131031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.131147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.131272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.131426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.131563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.131706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.131829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.131943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.131981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.132066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.132093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.132202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.132228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 
00:35:33.855 [2024-11-19 03:16:44.132346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.132374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.132463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.132490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.132576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.132602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.132699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.132725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.132816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.855 [2024-11-19 03:16:44.132842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.855 qpair failed and we were unable to recover it. 00:35:33.855 [2024-11-19 03:16:44.132922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.132948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.133081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.133107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.133220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.133247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.133395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.133424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.133534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.133562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 
00:35:33.856 [2024-11-19 03:16:44.133652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.133698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.133789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.133816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.133925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.133952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.134094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.134121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.134204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.134231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.134320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.134467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.134494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.134583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.134611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.134713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.134742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.134837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.134866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 
00:35:33.856 [2024-11-19 03:16:44.134982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.135102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.135247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.135417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.135565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.135695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.135840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.135961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.135993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.136103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.136129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.136211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.136238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 
00:35:33.856 [2024-11-19 03:16:44.136357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.136383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.136508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.136548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.136647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.136706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.136797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.136826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.136935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.136963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.137104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.137132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.137251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.137278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.137358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.137385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.137476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.137503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.137615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.137642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 
00:35:33.856 [2024-11-19 03:16:44.137766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.137794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.137879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.137906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.137998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.138026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.138112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.138139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.138257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.138284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.138394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.138421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.138517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.856 [2024-11-19 03:16:44.138545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.856 qpair failed and we were unable to recover it. 00:35:33.856 [2024-11-19 03:16:44.138666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.138714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.138813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.138852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.138967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.138996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 
00:35:33.857 [2024-11-19 03:16:44.139112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.139139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.139252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.139279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.139364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.139392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.139509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.139538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.139629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.139669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.139806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.139836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.139919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.139946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.140032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.140059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.140201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.140228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.140344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.140377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 
00:35:33.857 [2024-11-19 03:16:44.140500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.140527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.140644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.140674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.140774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.140801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.140888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.140916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 
00:35:33.857 [2024-11-19 03:16:44.141735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.141969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.141995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.142117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.142144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.142235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.142264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.142410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.142437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.142558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.142587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.142702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.142730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.142807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.142834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.142918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.142946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 
00:35:33.857 [2024-11-19 03:16:44.143059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.143085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.143171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.143197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.143307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.143334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.143450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.143477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.143566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.143594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.143696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.143724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.143836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.143877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.143998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.144026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.144107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.857 [2024-11-19 03:16:44.144134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.857 qpair failed and we were unable to recover it. 00:35:33.857 [2024-11-19 03:16:44.144254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.144281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 
00:35:33.858 [2024-11-19 03:16:44.144392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.144420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.144509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.144536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.144642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.144669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.144766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.144794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.144902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.144928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.145043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.145069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.145159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.145186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.145271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.145300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.145458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.145499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.145625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.145658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 
00:35:33.858 [2024-11-19 03:16:44.145785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.145812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.145891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.145918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.146056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.146083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.146194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.146221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.146368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.146396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.146509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.146540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.146650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.146678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.146773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.146800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.146882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.146908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.147011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 
00:35:33.858 [2024-11-19 03:16:44.147197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.147302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.147414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.147530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.147648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.147794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.147907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.147934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.148051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.148079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.148193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.148222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.148361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.148389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 
00:35:33.858 [2024-11-19 03:16:44.148474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.148503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.148617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.148644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.148749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.148776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.148865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.148892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.149012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.149040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.149128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.149154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.149238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.149270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.149347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.858 [2024-11-19 03:16:44.149373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.858 qpair failed and we were unable to recover it. 00:35:33.858 [2024-11-19 03:16:44.149484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.149511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.149589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.149615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.859 [2024-11-19 03:16:44.149699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.149726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.149813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.149839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.149918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.149944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.150060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.150193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.150298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.150414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.150541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.150685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.150804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.859 [2024-11-19 03:16:44.150948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.150975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.151089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.151229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.151351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.151475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.151588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.151705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.151884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.151972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.152120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.859 [2024-11-19 03:16:44.152227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.152367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.152486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.152602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.152746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.152875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.152902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.153013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.153156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.153270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.153409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.859 [2024-11-19 03:16:44.153553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.153664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.153788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.153903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.153929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.154012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.154078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:33.859 [2024-11-19 03:16:44.154112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.154252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.154406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.154554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.154670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.859 [2024-11-19 03:16:44.154802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.154946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.154973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.155094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.155121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.155234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.155261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.155378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.155406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.155520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.155546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.155632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.155659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.155752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.155779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.155893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.155921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.156019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.859 [2024-11-19 03:16:44.156148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.156300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.156447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.156573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.156702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.156825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.156942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.156969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.157055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.157082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.157198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.157224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.157366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.157393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.859 [2024-11-19 03:16:44.157512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.157543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.157660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.157687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.157784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.157811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.157925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.157953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.158088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.158129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.158224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.158252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.158338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.158365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.158469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.158495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.158574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.158601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 00:35:33.859 [2024-11-19 03:16:44.158705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.859 [2024-11-19 03:16:44.158732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.859 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.158845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.158873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.158995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.159131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.159274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.159386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.159522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.159684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.159842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.159954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.159980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.160067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.160094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.160178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.160205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.160326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.160354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.160441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.160468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.160556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.160588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.160713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.160742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.160863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.160893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.160989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.161016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.161154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.161180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.161263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.161290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.161408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.161436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.161589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.161617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.161709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.161739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.161866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.161894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.162010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.162177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.162294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.162417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.162543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.162658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.162776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.162916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.162942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.163955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.163983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.164074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.164101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.164189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.164216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.164290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.164317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.164434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.164462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.164586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.164613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.164741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.164782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.164905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.164934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.165029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.165150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.165272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.165443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.165578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.165684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.165798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.165905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.165932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.166047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.166074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.166198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.166224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.166357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.166387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.166489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.166518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.166640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.166669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.166795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.166823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.166947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.166974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.167056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.167082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.167201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.167227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.167364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.167390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.167485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.167526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.167647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.167675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.167797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.167823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.167968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.167995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.168141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.168167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.168299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.168339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 
00:35:33.860 [2024-11-19 03:16:44.168472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.168499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.860 qpair failed and we were unable to recover it. 00:35:33.860 [2024-11-19 03:16:44.168594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.860 [2024-11-19 03:16:44.168620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.168737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.168765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.168879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.168911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.169043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.169083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.169177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.169205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.169343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.169370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.169490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.169516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.169660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.169686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.169803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.169829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 
00:35:33.861 [2024-11-19 03:16:44.169916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.169945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.170032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.170059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.170170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.170196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.170277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.170304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.170400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.170426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.170536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.170575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.170727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.170755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.170873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.170900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.171008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.171034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.171176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.171203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 
00:35:33.861 [2024-11-19 03:16:44.171318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.171345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.171455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.171481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.171610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.171640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.171784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.171825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.171921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.171950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.172038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.172065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.172152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.172179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.172296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.172322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.172436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.172463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.172565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.172605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 
00:35:33.861 [2024-11-19 03:16:44.172716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.172748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.172850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.172878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.172994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.173110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.173228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.173369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.173542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.173668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.173825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 00:35:33.861 [2024-11-19 03:16:44.173940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.861 [2024-11-19 03:16:44.173968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.861 qpair failed and we were unable to recover it. 
00:35:33.861 [2024-11-19 03:16:44.174053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.861 [2024-11-19 03:16:44.174080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420
00:35:33.861 qpair failed and we were unable to recover it.
00:35:33.861 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 03:16:44.174164 through 03:16:44.202496 (elapsed 00:35:33.861-00:35:33.864) for tqpair handles 0x7f1b7c000b90, 0x7f1b74000b90, 0x7f1b70000b90 and 0x1942b40, all targeting addr=10.0.0.2, port=4420 ...]
00:35:33.864 [2024-11-19 03:16:44.202575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.202602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.202682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.202715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.202804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.202831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.202916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.202944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:33.864 [2024-11-19 03:16:44.203633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:33.864 [2024-11-19 03:16:44.203651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:33.864 [2024-11-19 03:16:44.203665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:33.864 [2024-11-19 03:16:44.203676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:33.864 [2024-11-19 03:16:44.203786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.203895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.203921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.204003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.204113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.204230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.204370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.204487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.204624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 
00:35:33.864 [2024-11-19 03:16:44.204741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.204888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.204916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.205031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.205057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.205173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.205200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.205212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:33.864 [2024-11-19 03:16:44.205277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:33.864 [2024-11-19 03:16:44.205291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.864 [2024-11-19 03:16:44.205318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.864 qpair failed and we were unable to recover it. 00:35:33.864 [2024-11-19 03:16:44.205245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:33.864 [2024-11-19 03:16:44.205271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:33.864 [2024-11-19 03:16:44.205408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.205436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.205520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.205545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.205658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.205684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.205775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.205802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 
00:35:33.865 [2024-11-19 03:16:44.205902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.205929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.206966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.206993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.207080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 
00:35:33.865 [2024-11-19 03:16:44.207195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.207331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.207443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.207558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.207699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.207810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.207952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.207978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.208070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.208189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.208335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 
00:35:33.865 [2024-11-19 03:16:44.208475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.208585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.208685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.208800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.208934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.208963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.209056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.209201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.209317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.209465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.209574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 
00:35:33.865 [2024-11-19 03:16:44.209716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.209838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.209944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.209971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.210084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.210190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.210302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.210410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.210556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.210729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.210841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 
00:35:33.865 [2024-11-19 03:16:44.210959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.210986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.211923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.211950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 
00:35:33.865 [2024-11-19 03:16:44.212168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.212916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.212997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.213116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.213235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 
00:35:33.865 [2024-11-19 03:16:44.213352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.213457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.213592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.213708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.213822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.213941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.213969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.214053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.865 [2024-11-19 03:16:44.214080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.865 qpair failed and we were unable to recover it. 00:35:33.865 [2024-11-19 03:16:44.214195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.214222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.214341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.214367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.214486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.214514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 
00:35:33.866 [2024-11-19 03:16:44.214629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.214658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.214749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.214777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.214862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.214889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.214973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.214999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.215106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.215132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.215291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.215319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.215401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.215429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.215550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.215578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.215663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.215697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.215782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.215809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 
00:35:33.866 [2024-11-19 03:16:44.215930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.215957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.216914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.216993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 
00:35:33.866 [2024-11-19 03:16:44.217102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.217222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.217362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.217486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.217622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.217766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.217880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.217907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.217994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.218022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.218145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.218171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.218262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.218288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 
00:35:33.866 [2024-11-19 03:16:44.218370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.218396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.218478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.218504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.218616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.218645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.218844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.218872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.218998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.219144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.219254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.219359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.219497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.219665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 
00:35:33.866 [2024-11-19 03:16:44.219781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.219886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.219916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.220870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.220896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 
00:35:33.866 [2024-11-19 03:16:44.221005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.221867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.221979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.222006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.222088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.222115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 
00:35:33.866 [2024-11-19 03:16:44.222200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.222227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.222311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.222338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.222426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.222452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.222539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.222566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.222647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.866 [2024-11-19 03:16:44.222674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.866 qpair failed and we were unable to recover it. 00:35:33.866 [2024-11-19 03:16:44.222762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.222790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.222878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.222906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.223021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.223132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.223256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 
00:35:33.867 [2024-11-19 03:16:44.223365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.223487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.223601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.223742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.223908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.223935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.224050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.224198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.224318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.224444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.224583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 
00:35:33.867 [2024-11-19 03:16:44.224684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.224812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.224952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.224979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.225058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.225085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.225174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.225213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.225317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.225357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.225506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.225534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.225623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.225649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.225782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.225809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.225890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.225918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 
00:35:33.867 [2024-11-19 03:16:44.226038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.226149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.226261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.226396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.226541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.226660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.226827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.226938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.226964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.227071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.227181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 
00:35:33.867 [2024-11-19 03:16:44.227328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.227441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.227568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.227722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.227841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.227946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.227973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.228090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.228197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.228337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.228448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 
00:35:33.867 [2024-11-19 03:16:44.228566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.228696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.228828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.228950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.228979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.229070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.229185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.229297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.229431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.229539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.229647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 
00:35:33.867 [2024-11-19 03:16:44.229812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.229923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.229950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.230070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.230190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.230333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.230444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.230618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.230754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.230882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.230995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 
00:35:33.867 [2024-11-19 03:16:44.231110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.231228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.231333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.231466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.231603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.231720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.867 [2024-11-19 03:16:44.231862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.867 [2024-11-19 03:16:44.231888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.867 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.231966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.231993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.232076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.232102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.232178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.232204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 
00:35:33.868 [2024-11-19 03:16:44.232295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.232321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.232430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.232456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.232563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.232589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.232718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.232758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.232850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.232878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.233025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.233134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.233271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.233417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.233577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 
00:35:33.868 [2024-11-19 03:16:44.233728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.233856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.233969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.233995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.234077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.234213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.234323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.234443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.234593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.234707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.234815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 
00:35:33.868 [2024-11-19 03:16:44.234956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.234983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.235893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.235919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 
00:35:33.868 [2024-11-19 03:16:44.236176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.236891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.236981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.237094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.237213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 
00:35:33.868 [2024-11-19 03:16:44.237322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.237432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.237582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.237744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.237865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.237892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.237980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.238092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.238204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.238341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.238467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 
00:35:33.868 [2024-11-19 03:16:44.238590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.238703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.238807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.238947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.238974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.239087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.239114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.239203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.239229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.239345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.239374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.239502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.239542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.239644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.239672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.239769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.239797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 
00:35:33.868 [2024-11-19 03:16:44.239880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.239907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.239992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.240019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.240144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.240172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.868 [2024-11-19 03:16:44.240253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.868 [2024-11-19 03:16:44.240279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.868 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.240359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.240388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.240468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.240495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.240601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.240629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.240724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.240752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.240832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.240861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.240979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 
00:35:33.869 [2024-11-19 03:16:44.241120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.241267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.241381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.241512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.241635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.241768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.241892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.241919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.242065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.242181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.242328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 
00:35:33.869 [2024-11-19 03:16:44.242444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.242563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.242674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.242807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.242918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.242945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.243033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.243143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.243261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.243375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.243523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 
00:35:33.869 [2024-11-19 03:16:44.243644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.243806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.243916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.243943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.244049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.244152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.244254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.244366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.244506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.244666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.244819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 
00:35:33.869 [2024-11-19 03:16:44.244933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.244960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.245098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.245212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.245363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.245519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.245629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.245756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.245865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.245981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.246124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 
00:35:33.869 [2024-11-19 03:16:44.246226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.246330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.246446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.246592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.246757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.246880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.246906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 
00:35:33.869 [2024-11-19 03:16:44.247495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.247963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.247990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.248064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.248090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.248201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.248227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.248313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.248342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.248463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.248489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.248604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.248635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 
00:35:33.869 [2024-11-19 03:16:44.248714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.248742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.248861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.248888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.248995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.249021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.249131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.249159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.249232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.249259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.249353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.869 [2024-11-19 03:16:44.249382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.869 qpair failed and we were unable to recover it. 00:35:33.869 [2024-11-19 03:16:44.249461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.249489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.249564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.249591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.249710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.249737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.249822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.249850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 
00:35:33.870 [2024-11-19 03:16:44.249938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.249968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.250958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.250987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.251103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.251130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 
00:35:33.870 [2024-11-19 03:16:44.251243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.251269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.251351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.251378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.251467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.251493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.251583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.251610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.251707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.251735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.251848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.251875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.251975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.252121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.252237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.252350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 
00:35:33.870 [2024-11-19 03:16:44.252460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.252576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.252718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.252838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.252945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.252972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.253087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.253116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.253209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.253236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.253314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.253342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.253459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.253493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.253587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.253618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 
00:35:33.870 [2024-11-19 03:16:44.253762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.253789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.253874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.253900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.253979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.254083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.254218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.254370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.254520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.254638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.254757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.254866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.254893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 
00:35:33.870 [2024-11-19 03:16:44.254977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.255096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.255231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.255378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.255496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.255635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.255791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.255904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.255931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 
00:35:33.870 [2024-11-19 03:16:44.256270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.256970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.256997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.257118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.257145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.257260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.257289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.257369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.257397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 
00:35:33.870 [2024-11-19 03:16:44.257498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.257538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.257637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.257666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.257763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.870 [2024-11-19 03:16:44.257790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.870 qpair failed and we were unable to recover it. 00:35:33.870 [2024-11-19 03:16:44.257904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.257931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.258007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.258121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.258236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.258383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.258498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.258614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 
00:35:33.871 [2024-11-19 03:16:44.258782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.258893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.258919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.259861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.259888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 
00:35:33.871 [2024-11-19 03:16:44.259975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.260117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.260237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.260352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.260472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.260643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.260765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.260906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.260934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.261020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.261128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 
00:35:33.871 [2024-11-19 03:16:44.261272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.261395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.261502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.261652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.261767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.261891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.261932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.262021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.262145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.262284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.262428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 
00:35:33.871 [2024-11-19 03:16:44.262537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.262667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.262829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.262942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.262969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.263113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.263140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.263222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.263249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.263361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.263389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.263471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.263498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.263608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.263648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.263744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.263773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 
00:35:33.871 [2024-11-19 03:16:44.263868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.263897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.264899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.264939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 
00:35:33.871 [2024-11-19 03:16:44.265152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.265950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.265977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.266060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.266088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.266176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.266204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 
00:35:33.871 [2024-11-19 03:16:44.266291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.266320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.266404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.266431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.266545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.266572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.266663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.266695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.871 qpair failed and we were unable to recover it. 00:35:33.871 [2024-11-19 03:16:44.266809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.871 [2024-11-19 03:16:44.266836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.266915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.266942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.267062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.267181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.267304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.267417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 
00:35:33.872 [2024-11-19 03:16:44.267533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.267646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.267769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.267888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.267915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.268026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.268139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.268312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.268424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.268569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.268676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 
00:35:33.872 [2024-11-19 03:16:44.268819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.268939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.268967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.269889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.269915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 
00:35:33.872 [2024-11-19 03:16:44.270030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.270970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.270998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.271105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.271132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 
00:35:33.872 [2024-11-19 03:16:44.271209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.271237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.271337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.271365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.271496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.271537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.271640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.271669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.271762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.271789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.271883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.271910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.272013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.272143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.272250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.272356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 
00:35:33.872 [2024-11-19 03:16:44.272498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.272619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.272786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.272896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.272922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.273039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.273151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.273278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.273413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.273527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.273661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 
00:35:33.872 [2024-11-19 03:16:44.273785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.273936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.273963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.274908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.274938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 
00:35:33.872 [2024-11-19 03:16:44.275051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.275078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.275155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.275182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.275262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.275290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.275379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.275411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.275502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.275533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.275616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.872 [2024-11-19 03:16:44.275644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.872 qpair failed and we were unable to recover it. 00:35:33.872 [2024-11-19 03:16:44.275777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.275806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.275895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.275922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.276002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.276102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 
00:35:33.873 [2024-11-19 03:16:44.276221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.276392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.276551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.276666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.276794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.276908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.276935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.277055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.277082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.277215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.277242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.277329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.277356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.277497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.277538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 
00:35:33.873 [2024-11-19 03:16:44.277672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.277713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.277801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.277828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.277914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.277941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.278035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.278150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.278308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.278435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.278553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.278668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.278799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 
00:35:33.873 [2024-11-19 03:16:44.278946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.278979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.279935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.279963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.280040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 
00:35:33.873 [2024-11-19 03:16:44.280177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.280326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.280452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.280562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.280687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.280815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.280930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.280957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.281067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.281171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.281321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 
00:35:33.873 [2024-11-19 03:16:44.281436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.281554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.281662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.281781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.281889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.873 [2024-11-19 03:16:44.281916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.873 qpair failed and we were unable to recover it. 00:35:33.873 [2024-11-19 03:16:44.282005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.282125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.282255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.282379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.282523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 
00:35:33.874 [2024-11-19 03:16:44.282632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.282795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.282909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.282937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.283056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.283084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.283210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.283237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.283334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.283361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.283446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.283473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.283557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.283584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.283672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.283713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.283830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.283858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 
00:35:33.874 [2024-11-19 03:16:44.283982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.284887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.284968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.285115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 
00:35:33.874 [2024-11-19 03:16:44.285237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.285347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.285476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.285614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.285749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.285866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.285894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.286010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.286154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.286273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.286433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 
00:35:33.874 [2024-11-19 03:16:44.286562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.286702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.286842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.286959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.286987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.287131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.287158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.287276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.287305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.287387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.287415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.287503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.287531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.287633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.287660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.287789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.287817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 
00:35:33.874 [2024-11-19 03:16:44.287904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.287931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.288884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.288996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.289023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 
00:35:33.874 [2024-11-19 03:16:44.289141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.289169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.289296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.874 [2024-11-19 03:16:44.289324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.874 qpair failed and we were unable to recover it. 00:35:33.874 [2024-11-19 03:16:44.289408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.289437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.289536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.289565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.289679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.289714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.289800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.289827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.289943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.289977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.290072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.290178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.290319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 
00:35:33.875 [2024-11-19 03:16:44.290431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.290547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.290658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.290809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.290928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.290957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.291086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.291212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.291332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.291443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.291564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 
00:35:33.875 [2024-11-19 03:16:44.291705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.291825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.291952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.291980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.292065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.292092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.292220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.292248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.292335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.292363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.292455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.292483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.292570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.292601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.292721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.292749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.292868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.292895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 
00:35:33.875 [2024-11-19 03:16:44.292977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.293120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.293232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.293351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.293471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.293615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.293763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.293913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.293940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.294076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.294103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.294247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.294273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 
00:35:33.875 [2024-11-19 03:16:44.294364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.294393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.294540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.294581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.294707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.294736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.294829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.294856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.294936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.294962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.295065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.295091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.295212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.295239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.295358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.295386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.295473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.295500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.295578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.295610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 
00:35:33.875 [2024-11-19 03:16:44.295706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.295734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.295829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.295869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 00:35:33.875 [2024-11-19 03:16:44.296882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.875 [2024-11-19 03:16:44.296909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.875 qpair failed and we were unable to recover it. 
00:35:33.876 [2024-11-19 03:16:44.296994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.297875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.297977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.298098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 
00:35:33.876 [2024-11-19 03:16:44.298212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.298379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.298518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.298635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.298759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.298878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.298904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.298996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.299124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.299228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.299341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 
00:35:33.876 [2024-11-19 03:16:44.299516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.299647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.299767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.299879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.299907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.299993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.300132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.300242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.300358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.300464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.300575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 
00:35:33.876 [2024-11-19 03:16:44.300701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.300811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.300948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.300975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.301113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.301140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.301226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.301258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.301356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.301385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.301501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.301531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.301621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.301659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.301772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.301799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.301877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.301904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 
00:35:33.876 [2024-11-19 03:16:44.301988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.302880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.302967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.303001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 00:35:33.876 [2024-11-19 03:16:44.303090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.876 [2024-11-19 03:16:44.303119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.876 qpair failed and we were unable to recover it. 
00:35:33.877 [2024-11-19 03:16:44.303228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.303255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.303369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.303408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.303492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.303521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.303640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.303667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.303767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.303796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.303883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.303910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.304004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.304144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.304291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.304411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 
00:35:33.877 [2024-11-19 03:16:44.304541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.304649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.304785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.304893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.304921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.305003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.305149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.305258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.305411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.305525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.305643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 
00:35:33.877 [2024-11-19 03:16:44.305776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.305893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.305920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.306867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.306895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 
00:35:33.877 [2024-11-19 03:16:44.306994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.307891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.307991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.308099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 
00:35:33.877 [2024-11-19 03:16:44.308217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.308330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.308446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.308600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.308731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.308848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.308963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.308989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.309084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.309110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.309190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.309218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.309312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.309338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 
00:35:33.877 [2024-11-19 03:16:44.309457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.309484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.309567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.309593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.309705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.309735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.309817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.877 [2024-11-19 03:16:44.309846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.877 qpair failed and we were unable to recover it. 00:35:33.877 [2024-11-19 03:16:44.309927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.309954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.310064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.310198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.310310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.310428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.310549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 
00:35:33.878 [2024-11-19 03:16:44.310694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.310808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.310929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.310957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.311056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.311195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.311323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.311432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.311542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.311658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.311784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 
00:35:33.878 [2024-11-19 03:16:44.311919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.311946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.312924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.312957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.313044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 
00:35:33.878 [2024-11-19 03:16:44.313213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.313325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.313435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.313553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.313659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.313807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.313921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.313948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.314046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.314170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.314317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 
00:35:33.878 [2024-11-19 03:16:44.314470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.314591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.314708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.314821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.314925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.314951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.315034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.315170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.315276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.315420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.315531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 
00:35:33.878 [2024-11-19 03:16:44.315677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.315803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.315910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.315937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.316017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.316143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.316289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.316437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.316562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.316679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.316803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 
00:35:33.878 [2024-11-19 03:16:44.316908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.316934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.878 [2024-11-19 03:16:44.317025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.878 [2024-11-19 03:16:44.317051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.878 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.317166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.317193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.317287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.317315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.317395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.317421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.317501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.317526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.317636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.317663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.317784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.317812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.317894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.317923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.318041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 
00:35:33.879 [2024-11-19 03:16:44.318170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.318310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.318429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.318541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.318679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.318827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.318935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.318962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.319051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.319199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.319317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 
00:35:33.879 [2024-11-19 03:16:44.319436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.319558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.319675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.319836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.319952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.319980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.320094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.320214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.320336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.320441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.320549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 
00:35:33.879 [2024-11-19 03:16:44.320676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.320789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.320911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.320938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.321045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.321072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.321159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.321186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.321288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.321325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.321440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.321472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.321615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.321643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.321731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.321759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.321844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.321871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 
00:35:33.879 [2024-11-19 03:16:44.321967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.322140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.322250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.322361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.322465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.322592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.322737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.322887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.322915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.323012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.323040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 00:35:33.879 [2024-11-19 03:16:44.323130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.879 [2024-11-19 03:16:44.323165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.879 qpair failed and we were unable to recover it. 
00:35:33.879 [2024-11-19 03:16:44.323244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.323270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.323375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.323402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.323522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.323551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.323633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.323661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.323763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.323792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.323876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.323904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.323992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.324108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.324221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.324365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 
00:35:33.880 [2024-11-19 03:16:44.324476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.324583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.324700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.324826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.324944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.324972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.325062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.325206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.325372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.325487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.325587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 
00:35:33.880 [2024-11-19 03:16:44.325730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.325841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.325954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.325983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.326094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.326206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.326368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.326477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.326591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.326700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.326809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 
00:35:33.880 [2024-11-19 03:16:44.326926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.326953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.327933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.327962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 
00:35:33.880 [2024-11-19 03:16:44.328178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.328893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.328987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.329014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.329094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.329129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.329211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.329239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 
00:35:33.880 [2024-11-19 03:16:44.329355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.329383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.329470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.329499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.329611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.329643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.329768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.880 [2024-11-19 03:16:44.329795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.880 qpair failed and we were unable to recover it. 00:35:33.880 [2024-11-19 03:16:44.329887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.329914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.329999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.330112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.330227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.330348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.330496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 
00:35:33.881 [2024-11-19 03:16:44.330634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.330752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.330862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.330962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.330988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.331064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.331090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.331235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.331264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.331367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.331394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.331482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.331511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.331622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.331648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.331744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.331772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 
00:35:33.881 [2024-11-19 03:16:44.331877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.331904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.332909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.332936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.333025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 
00:35:33.881 [2024-11-19 03:16:44.333153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.333263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.333416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.333535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.333695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.333833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.333948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.333975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.334055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.334170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.334315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 
00:35:33.881 [2024-11-19 03:16:44.334451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.334560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.334681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.334839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.334952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.334979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.335089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.335205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.335314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.335426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.335561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 
00:35:33.881 [2024-11-19 03:16:44.335681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.335818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.335932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.335959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.336051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.336170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.336285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.336444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.336558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.336678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.881 [2024-11-19 03:16:44.336799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 
00:35:33.881 [2024-11-19 03:16:44.336910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.881 [2024-11-19 03:16:44.336937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.881 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.337903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.337931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.338010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 
00:35:33.882 [2024-11-19 03:16:44.338161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.338311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.338416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.338551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.338665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.338806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.338918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.338947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 
00:35:33.882 [2024-11-19 03:16:44.339393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.339889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.339968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.340092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.340205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.340319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.340453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 
00:35:33.882 [2024-11-19 03:16:44.340587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.340744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.340862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.340894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.341051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.341171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.341275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.341389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.341545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.341657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.341785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 
00:35:33.882 [2024-11-19 03:16:44.341899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.341925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.342890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.342979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.343010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 
00:35:33.882 [2024-11-19 03:16:44.343131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.343158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.343254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.882 [2024-11-19 03:16:44.343281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.882 qpair failed and we were unable to recover it. 00:35:33.882 [2024-11-19 03:16:44.343367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.343393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.343474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.343501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.343582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.343608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.343730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.343758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.343841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.343869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b74000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.343954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.343983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 
00:35:33.883 [2024-11-19 03:16:44.344280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.344900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.344990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.345112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b7c000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.345236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.345346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 
00:35:33.883 [2024-11-19 03:16:44.345461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.345585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.345734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.345869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.345896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.345991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.346018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.346105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.346132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.346230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.346262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.346351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.346377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b70000b90 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.346461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.346489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 00:35:33.883 [2024-11-19 03:16:44.346575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.883 [2024-11-19 03:16:44.346602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420 00:35:33.883 qpair failed and we were unable to recover it. 
00:35:33.883 [2024-11-19 03:16:44.346678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.883 [2024-11-19 03:16:44.346718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942b40 with addr=10.0.0.2, port=4420
00:35:33.883 qpair failed and we were unable to recover it.
00:35:33.883 A controller has encountered a failure and is being reset.
00:35:33.886 [2024-11-19 03:16:44.363365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.886 [2024-11-19 03:16:44.363422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1950970 with addr=10.0.0.2, port=4420 [2024-11-19 03:16:44.363443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950970 is same with the state(6) to be set
00:35:33.886 [2024-11-19 03:16:44.363469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1950970 (9): Bad file descriptor
00:35:33.886 [2024-11-19 03:16:44.363489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:35:33.886 [2024-11-19 03:16:44.363504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:35:33.886 [2024-11-19 03:16:44.363520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:35:33.886 Unable to reset the controller.
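Note on the failures above: errno 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 while the target side of the disconnect test was down, so every reconnect attempt by the host failed until the listener was re-created. The target that the tc2 trace below configures can also be brought up by hand with the same RPCs; the following is only an illustrative sketch, assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py against a running nvmf_tgt (the bdev size, NQN, serial and listen address are copied from the trace):

    # sketch: recreate the test target manually (assumes nvmf_tgt is already running)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

A host-side connection check against that listener could then be made with nvme-cli, e.g. nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 (hypothetical here; the test itself drives the connection through SPDK's own host stack).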
00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:34.145 Malloc0 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:34.145 [2024-11-19 03:16:44.451579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.145 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:34.146 [2024-11-19 03:16:44.479860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.146 03:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 404113 00:35:35.084 Controller properly reset. 00:35:40.360 Initializing NVMe Controllers 00:35:40.360 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:40.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:40.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:40.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:40.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:40.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:40.360 Initialization complete. Launching workers. 
00:35:40.360 Starting thread on core 1 00:35:40.360 Starting thread on core 2 00:35:40.360 Starting thread on core 3 00:35:40.360 Starting thread on core 0 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:40.360 00:35:40.360 real 0m10.651s 00:35:40.360 user 0m33.847s 00:35:40.360 sys 0m7.071s 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.360 ************************************ 00:35:40.360 END TEST nvmf_target_disconnect_tc2 00:35:40.360 ************************************ 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:40.360 rmmod nvme_tcp 00:35:40.360 rmmod nvme_fabrics 00:35:40.360 rmmod nvme_keyring 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 404519 ']' 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 404519 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 404519 ']' 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 404519 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 404519 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 404519' 00:35:40.360 killing process with pid 404519 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 404519 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 404519 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:40.360 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.361 03:16:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.301 03:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:42.301 00:35:42.301 real 0m15.616s 00:35:42.301 user 0m59.067s 00:35:42.301 sys 0m9.719s 00:35:42.301 03:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.301 03:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:42.301 ************************************ 00:35:42.301 END TEST nvmf_target_disconnect 00:35:42.301 ************************************ 00:35:42.301 03:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:42.301 00:35:42.301 real 6m44.895s 00:35:42.301 user 17m32.358s 00:35:42.301 sys 1m29.163s 00:35:42.301 03:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.301 03:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.301 ************************************ 00:35:42.301 END TEST nvmf_host 00:35:42.301 ************************************ 00:35:42.301 03:16:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:42.301 03:16:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:42.301 03:16:52 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:42.301 03:16:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:42.301 03:16:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.301 03:16:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.301 ************************************ 00:35:42.301 START TEST nvmf_target_core_interrupt_mode 00:35:42.301 ************************************ 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:42.301 * Looking for test storage... 00:35:42.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.301 --rc genhtml_branch_coverage=1 00:35:42.301 --rc genhtml_function_coverage=1 00:35:42.301 --rc genhtml_legend=1 00:35:42.301 --rc geninfo_all_blocks=1 00:35:42.301 --rc geninfo_unexecuted_blocks=1 00:35:42.301 00:35:42.301 ' 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.301 --rc genhtml_branch_coverage=1 00:35:42.301 --rc genhtml_function_coverage=1 00:35:42.301 --rc genhtml_legend=1 00:35:42.301 --rc geninfo_all_blocks=1 00:35:42.301 --rc geninfo_unexecuted_blocks=1 00:35:42.301 00:35:42.301 ' 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.301 --rc genhtml_branch_coverage=1 00:35:42.301 --rc genhtml_function_coverage=1 00:35:42.301 --rc genhtml_legend=1 00:35:42.301 --rc geninfo_all_blocks=1 00:35:42.301 --rc geninfo_unexecuted_blocks=1 00:35:42.301 00:35:42.301 ' 00:35:42.301 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.302 --rc genhtml_branch_coverage=1 00:35:42.302 --rc genhtml_function_coverage=1 00:35:42.302 --rc genhtml_legend=1 00:35:42.302 --rc geninfo_all_blocks=1 00:35:42.302 --rc geninfo_unexecuted_blocks=1 00:35:42.302 00:35:42.302 ' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:42.302 ************************************ 00:35:42.302 START TEST nvmf_abort 00:35:42.302 ************************************ 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:42.302 * Looking for test storage... 00:35:42.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:35:42.302 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.562 --rc genhtml_branch_coverage=1 00:35:42.562 --rc genhtml_function_coverage=1 00:35:42.562 --rc genhtml_legend=1 00:35:42.562 --rc geninfo_all_blocks=1 00:35:42.562 --rc geninfo_unexecuted_blocks=1 00:35:42.562 00:35:42.562 ' 00:35:42.562 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.562 --rc genhtml_branch_coverage=1 00:35:42.562 --rc genhtml_function_coverage=1 00:35:42.562 --rc genhtml_legend=1 00:35:42.562 --rc geninfo_all_blocks=1 00:35:42.562 --rc geninfo_unexecuted_blocks=1 00:35:42.562 00:35:42.562 ' 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:42.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.563 --rc genhtml_branch_coverage=1 00:35:42.563 --rc genhtml_function_coverage=1 00:35:42.563 --rc genhtml_legend=1 00:35:42.563 --rc geninfo_all_blocks=1 00:35:42.563 --rc geninfo_unexecuted_blocks=1 00:35:42.563 00:35:42.563 ' 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:42.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.563 --rc genhtml_branch_coverage=1 00:35:42.563 --rc genhtml_function_coverage=1 00:35:42.563 --rc genhtml_legend=1 00:35:42.563 --rc geninfo_all_blocks=1 00:35:42.563 --rc geninfo_unexecuted_blocks=1 00:35:42.563 00:35:42.563 ' 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.563 03:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.563 03:16:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:42.563 03:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.467 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:44.467 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:44.467 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:44.467 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:44.467 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:44.467 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:44.467 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:44.468 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:44.726 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:44.726 03:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:44.726 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:44.726 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:44.726 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:44.727 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:44.727 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:44.727 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:44.727 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:44.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:44.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:35:44.727 00:35:44.727 --- 10.0.0.2 ping statistics --- 00:35:44.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.727 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:44.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:44.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:35:44.727 00:35:44.727 --- 10.0.0.1 ping statistics --- 00:35:44.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.727 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:44.727 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=407325 
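
The nvmf_tcp_init sequence traced above isolates one port of the two-port NIC found earlier in a dedicated network namespace, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) exchange real TCP traffic on a single host. A minimal manual equivalent, assuming the same interface names and addresses seen in this run (this is a sketch of what common.sh did here, not the helper itself):

$ ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
$ ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
$ ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
$ ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
$ ip link set cvl_0_1 up
$ ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$ ip netns exec cvl_0_0_ns_spdk ip link set lo up
$ iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
$ ping -c 1 10.0.0.2                                                  # initiator -> target reachability
$ ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The two zero-loss ping blocks above are the output of exactly these checks; the target application is then launched under ip netns exec (next trace line) so it listens inside the namespace.
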
00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 407325 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 407325 ']' 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.728 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.728 [2024-11-19 03:16:55.314592] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:44.728 [2024-11-19 03:16:55.315637] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:44.728 [2024-11-19 03:16:55.315725] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.987 [2024-11-19 03:16:55.403590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:44.987 [2024-11-19 03:16:55.454809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.987 [2024-11-19 03:16:55.454875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.987 [2024-11-19 03:16:55.454910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.987 [2024-11-19 03:16:55.454934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.987 [2024-11-19 03:16:55.454962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.987 [2024-11-19 03:16:55.456845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.987 [2024-11-19 03:16:55.456919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.987 [2024-11-19 03:16:55.456930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.987 [2024-11-19 03:16:55.548245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:44.987 [2024-11-19 03:16:55.548464] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:44.987 [2024-11-19 03:16:55.548495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:35:44.987 [2024-11-19 03:16:55.548796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 [2024-11-19 03:16:55.661745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 Malloc0 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 Delay0 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 [2024-11-19 03:16:55.729875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.245 03:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:45.245 [2024-11-19 03:16:55.836014] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:47.778 Initializing NVMe Controllers 00:35:47.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:47.778 controller IO queue size 128 less than required 00:35:47.778 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:47.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:47.778 Initialization complete. Launching workers. 
00:35:47.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28604 00:35:47.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28661, failed to submit 66 00:35:47.778 success 28604, unsuccessful 57, failed 0 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.778 rmmod nvme_tcp 00:35:47.778 rmmod nvme_fabrics 00:35:47.778 rmmod nvme_keyring 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:35:47.778 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:35:47.779 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 407325 ']' 00:35:47.779 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 407325 00:35:47.779 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 407325 ']' 00:35:47.779 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 407325 00:35:47.779 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:35:47.779 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:47.779 03:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 407325 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 407325' 00:35:47.779 killing process with pid 407325 00:35:47.779 
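
For reference, every target-side object exercised by that abort run was created over RPC in the trace above; the rpc_cmd helper issues the same RPC methods that plain rpc.py calls would. A sketch of the equivalent standalone invocations, with values copied from this run and paths relative to the SPDK tree (an illustration, not the test harness itself):

$ scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256          # TCP transport
$ scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0                   # RAM-backed bdev: 64 MB, 4 KiB blocks
$ scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Exposing the namespace as a Delay0 bdev layered on Malloc0 keeps submitted I/O outstanding long enough for the abort example to have commands left to cancel; the counters above ("abort submitted 28661 ... success 28604, unsuccessful 57, failed 0") are the outcome of that run before the target is torn down below.
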
03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 407325 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 407325 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.779 03:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.699 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:49.699 00:35:49.699 real 0m7.417s 00:35:49.699 user 0m9.425s 00:35:49.699 sys 0m2.888s 00:35:49.699 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.699 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.699 ************************************ 00:35:49.699 END TEST nvmf_abort 00:35:49.699 ************************************ 00:35:49.699 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:49.699 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:49.699 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.699 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:49.959 ************************************ 00:35:49.959 START TEST nvmf_ns_hotplug_stress 00:35:49.959 ************************************ 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:49.959 * Looking for test storage... 
00:35:49.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:49.959 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.960 --rc genhtml_branch_coverage=1 00:35:49.960 --rc genhtml_function_coverage=1 00:35:49.960 --rc genhtml_legend=1 00:35:49.960 --rc geninfo_all_blocks=1 00:35:49.960 --rc geninfo_unexecuted_blocks=1 00:35:49.960 00:35:49.960 ' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.960 --rc genhtml_branch_coverage=1 00:35:49.960 --rc genhtml_function_coverage=1 00:35:49.960 --rc genhtml_legend=1 00:35:49.960 --rc geninfo_all_blocks=1 00:35:49.960 --rc geninfo_unexecuted_blocks=1 00:35:49.960 00:35:49.960 ' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.960 --rc genhtml_branch_coverage=1 00:35:49.960 --rc genhtml_function_coverage=1 00:35:49.960 --rc genhtml_legend=1 00:35:49.960 --rc geninfo_all_blocks=1 00:35:49.960 --rc geninfo_unexecuted_blocks=1 00:35:49.960 00:35:49.960 ' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.960 --rc genhtml_branch_coverage=1 00:35:49.960 --rc genhtml_function_coverage=1 
00:35:49.960 --rc genhtml_legend=1 00:35:49.960 --rc geninfo_all_blocks=1 00:35:49.960 --rc geninfo_unexecuted_blocks=1 00:35:49.960 00:35:49.960 ' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
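The trace above shows scripts/common.sh checking whether the installed lcov is older than version 2 before opting into the branch/function coverage flags. A simplified sketch of that element-wise comparison, with an illustrative function name rather than the script's own helpers:

# Simplified sketch of the version check traced above: split each version on ".", "-" or ":"
# and compare element by element; return 0 (true) only when ver1 < ver2.
version_lt() {
  local IFS=.-:
  local -a ver1=($1) ver2=($2)
  local v
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}

# lcov 1.15 < 2, so the extra coverage options get enabled, as in the log:
version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'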
00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:49.960 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:35:49.961 03:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:52.503 03:17:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:52.503 03:17:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:52.503 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:52.504 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:52.504 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:52.504 
03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:52.504 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:52.504 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:52.504 03:17:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:52.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:52.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:35:52.504 00:35:52.504 --- 10.0.0.2 ping statistics --- 00:35:52.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.504 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:52.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:52.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:35:52.504 00:35:52.504 --- 10.0.0.1 ping statistics --- 00:35:52.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.504 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=409656 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 409656 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 409656 ']' 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.504 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
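Before nvmfappstart, nvmf/common.sh moves the target-side e810 port (cvl_0_0) into its own network namespace, addresses both ends from 10.0.0.0/24, opens TCP port 4420 through iptables, and verifies reachability with one ping in each direction; the target is then launched inside that namespace with a 3-core mask in interrupt mode. Condensed from the commands traced above (paths shortened to the SPDK build tree):

# Target-side NIC lives in a dedicated netns; initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Target app started inside the namespace in interrupt mode (nvmfpid is 409656 in this run):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!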
00:35:52.505 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.505 03:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:52.505 [2024-11-19 03:17:02.860532] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:52.505 [2024-11-19 03:17:02.861729] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:35:52.505 [2024-11-19 03:17:02.861804] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.505 [2024-11-19 03:17:02.932858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:52.505 [2024-11-19 03:17:02.975443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.505 [2024-11-19 03:17:02.975518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.505 [2024-11-19 03:17:02.975541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.505 [2024-11-19 03:17:02.975553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.505 [2024-11-19 03:17:02.975562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:52.505 [2024-11-19 03:17:02.976996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:52.505 [2024-11-19 03:17:02.977124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:52.505 [2024-11-19 03:17:02.977127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.505 [2024-11-19 03:17:03.056283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:52.505 [2024-11-19 03:17:03.056466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:52.505 [2024-11-19 03:17:03.056468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:52.505 [2024-11-19 03:17:03.056729] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:35:52.505 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:52.764 [2024-11-19 03:17:03.353909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:52.764 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:53.331 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:53.331 [2024-11-19 03:17:03.914586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.331 03:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:53.590 03:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:35:54.159 Malloc0 00:35:54.159 03:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:54.159 Delay0 00:35:54.159 03:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:54.418 03:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:35:54.676 NULL1 00:35:54.676 03:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
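With the target up, ns_hotplug_stress.sh configures it over rpc.py and then, while spdk_nvme_perf drives random reads from the initiator, repeatedly removes and re-adds namespace 1 and grows the NULL1 bdev by one block per pass (null_size 1000, 1001, 1002, ...). A condensed sketch of that sequence, based on the rpc.py and perf invocations traced in this log; the loop control of the real script is simplified here:

rpc=./scripts/rpc.py   # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py in this run

# Transport, subsystem and listeners
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: a delay bdev layered on malloc, plus a resizable null bdev
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Initiator-side load for 30 s while namespaces are hotplugged underneath it
./build/bin/spdk_nvme_perf -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 30 -q 128 -w randread -o 512 -Q 1000 &
perf_pid=$!

null_size=1000
while kill -0 "$perf_pid" 2>/dev/null; do     # simplified: run until perf exits
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  null_size=$((null_size + 1))
  $rpc bdev_null_resize NULL1 "$null_size"
done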
00:35:55.242 03:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=409960 00:35:55.242 03:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:35:55.242 03:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:55.242 03:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:35:56.180 Read completed with error (sct=0, sc=11) 00:35:56.180 03:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:56.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:56.696 03:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:35:56.696 03:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:35:56.955 true 00:35:56.955 03:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:35:56.955 03:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:57.522 03:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.780 03:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:35:57.780 03:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:58.347 true 00:35:58.347 03:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:35:58.347 03:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:35:58.347 03:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:58.605 03:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:35:58.605 03:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:35:58.864 true 00:35:58.864 03:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:35:58.864 03:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.433 03:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:59.433 03:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:35:59.433 03:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:35:59.690 true 00:35:59.691 03:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:35:59.691 03:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:00.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.624 03:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:00.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.882 03:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:00.882 03:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:01.140 true 00:36:01.140 03:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:01.140 03:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:01.708 03:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:01.708 03:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:01.708 03:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:01.967 true 00:36:01.967 03:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:01.967 03:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:02.900 03:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:02.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:03.157 03:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:03.157 03:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:03.416 true 00:36:03.416 03:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:03.416 03:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.674 03:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:03.932 03:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:03.932 03:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:04.190 true 00:36:04.190 03:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:04.190 03:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.124 03:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:05.382 03:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:05.382 03:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:05.640 true 00:36:05.640 03:17:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:05.640 03:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.898 03:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:06.155 03:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:06.155 03:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:06.414 true 00:36:06.414 03:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:06.414 03:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:06.672 03:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:06.930 03:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:06.930 03:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:07.188 true 00:36:07.188 03:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:07.189 03:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.129 03:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:08.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.387 03:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:08.387 03:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:08.645 true 00:36:08.645 03:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:08.645 03:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.902 03:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:09.158 03:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:09.158 03:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:09.416 true 00:36:09.416 03:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:09.416 03:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.675 03:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:09.933 03:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:09.933 03:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:10.500 true 00:36:10.500 03:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:10.500 03:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:11.435 03:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:11.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.693 03:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:11.693 03:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:11.951 true 00:36:11.951 03:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:11.951 03:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:12.209 03:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:12.468 03:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:12.468 03:17:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:12.726 true 00:36:12.726 03:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:12.726 03:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.292 03:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:13.859 03:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:13.859 03:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:13.859 true 00:36:13.859 03:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:13.859 03:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.118 03:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:14.684 03:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:14.684 03:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:14.684 true 00:36:14.942 03:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:14.942 03:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:15.769 03:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:16.027 03:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:16.027 03:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:16.285 true 00:36:16.285 03:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:16.285 03:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.544 03:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.802 03:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:16.802 03:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:17.073 true 00:36:17.073 03:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:17.073 03:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.330 03:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.588 03:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:17.588 03:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:17.846 true 00:36:17.846 03:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:17.846 03:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.786 03:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.045 03:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:19.045 03:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:19.303 true 00:36:19.303 03:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:19.303 03:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.561 03:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.819 03:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:19.819 03:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:20.079 true 00:36:20.338 03:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:20.338 03:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.596 03:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.868 03:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:20.868 03:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:21.134 true 00:36:21.134 03:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:21.134 03:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:22.069 03:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.069 03:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:22.069 03:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:22.329 true 00:36:22.588 03:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:22.588 03:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.846 03:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.104 03:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:23.104 03:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1026 00:36:23.362 true 00:36:23.362 03:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:23.362 03:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.620 03:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.879 03:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:23.879 03:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:24.137 true 00:36:24.137 03:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960 00:36:24.137 03:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.074 03:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.332 Initializing NVMe Controllers 00:36:25.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:25.332 Controller IO queue size 128, less than required. 00:36:25.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:25.332 Controller IO queue size 128, less than required. 00:36:25.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:25.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:25.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:25.332 Initialization complete. Launching workers. 
00:36:25.332 ========================================================
00:36:25.332 Latency(us)
00:36:25.332 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:36:25.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     822.99       0.40   70968.94    2878.07 1019962.78
00:36:25.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    9440.18       4.61   13558.57    2209.81  394606.24
00:36:25.332 ========================================================
00:36:25.332 Total                                                                   :   10263.16       5.01   18162.21    2209.81 1019962.78
00:36:25.332
00:36:25.332 03:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:36:25.332 03:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:36:25.590 true
00:36:25.590 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409960
00:36:25.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (409960) - No such process
00:36:25.590 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 409960
00:36:25.590 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:25.848 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:26.107 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:36:26.107 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:36:26.107 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:36:26.107 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:26.107 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:36:26.367 null0
00:36:26.367 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:26.367 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:26.367 03:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:36:26.628 null1
00:36:26.628 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:26.628 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:26.628 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:26.886 null2 00:36:26.886 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.886 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.886 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:27.145 null3 00:36:27.145 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:27.145 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:27.145 03:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:27.405 null4 00:36:27.405 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:27.405 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:27.405 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:27.666 null5 00:36:27.926 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:27.926 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:27.926 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:27.926 null6 00:36:28.185 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:28.185 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:28.185 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:28.444 null7 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
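The entries traced above at script lines @44-@50 are the namespace hotplug loop of ns_hotplug_stress.sh: as long as the background I/O job (pid 409960) is still alive, namespace 1 is hot-removed and then re-attached using the Delay0 bdev, and the NULL1 null bdev is resized to the next value of null_size (the counter reaches 1028 in this run). A minimal sketch of that loop, reconstructed only from the traced commands; the $rpc shorthand and the starting value are illustrative, not taken from the script itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=409960   # pid of the background I/O job, as seen in the trace
    null_size=1000    # illustrative starting point; the trace counts up to 1028
    while kill -0 "$perf_pid"; do                                      # @44: keep looping while the I/O job runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach it on the Delay0 bdev
        null_size=$((null_size + 1))                                   # @49: bump the target size
        $rpc bdev_null_resize NULL1 "$null_size"                       # @50: resize NULL1 while I/O is in flight
    done

Once kill -0 reports "No such process" the I/O job has exited, so the loop ends and the script removes namespaces 1 and 2 (@54-@55) before starting the threaded add/remove phase traced below.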
00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:28.444 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
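Interleaved with the launch messages, each background worker runs the add_remove helper traced at script lines @14-@18: it attaches its null bdev as a fixed namespace ID and detaches it again, ten times (the loop bound comes from the repeated "(( i < 10 ))" guards). A hedged reconstruction from the traced lines, using the same $rpc shorthand as in the earlier sketch:

    add_remove() {
        local nsid=$1 bdev=$2                                                          # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                                                 # @16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17: hot-add the namespace
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18: hot-remove it again
        done
    }

With eight of these workers running concurrently against the same subsystem, the add and remove RPCs interleave freely, which is exactly the churn the following trace shows.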
00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 413949 413950 413952 413954 413956 413958 413960 413962 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.445 03:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:28.704 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:28.704 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.704 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:28.704 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:28.704 03:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:28.704 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:28.704 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:28.704 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:28.963 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.963 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.963 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:28.963 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.963 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.963 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.964 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:29.222 03:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
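The workers themselves are started by the driver fragment traced at @58-@66: @59-@60 create one null bdev per thread (bdev_null_create with the arguments 100 and 4096, as logged), @62-@64 launch a backgrounded add_remove per bdev and collect its PID, and @66 waits for the whole group. A sketch reusing the add_remove helper and the $rpc shorthand from the notes above; the loop shapes are inferred from the traced guards:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # @59
        $rpc bdev_null_create "null$i" 100 4096       # @60: backing bdev for worker i
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove "$((i + 1))" "null$i" &            # @63: e.g. "add_remove 3 null2" in the trace
        pids+=($!)                                    # @64: remember the worker pid
    done
    wait "${pids[@]}"                                 # @66: "wait 413949 413950 ..." in the trace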
00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.481 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:29.739 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:29.740 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:29.740 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.740 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:29.740 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:29.998 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:29.998 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:29.998 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:30.256 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.257 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:30.514 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:30.514 03:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:30.515 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.515 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:30.515 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.515 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:30.515 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:30.515 03:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.773 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:31.031 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:31.031 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:31.031 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.031 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:31.031 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:31.031 
03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:31.031 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:31.031 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.289 03:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:31.547 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.114 
03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.114 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:32.373 03:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.631 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:32.890 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:33.148 03:17:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.148 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:33.407 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:33.407 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.407 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:33.407 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:33.407 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:33.407 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:33.407 
03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:33.407 03:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.665 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:33.924 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:33.924 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.924 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:33.924 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:34.182 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:34.182 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:34.182 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:34.182 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:34.440 rmmod nvme_tcp 00:36:34.440 rmmod nvme_fabrics 00:36:34.440 rmmod nvme_keyring 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:34.440 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 409656 ']' 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 409656 00:36:34.441 03:17:44 
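The add/remove churn traced above comes from lines 16-18 of ns_hotplug_stress.sh: a ten-iteration loop that attaches the eight null bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and then detaches them again. A minimal sketch of that loop, reconstructed from the trace (the shuffled ordering and the rpc_py variable name are assumptions, not the exact script body):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as invoked in the trace
  for (( i = 0; i < 10; i++ )); do                                          # ns_hotplug_stress.sh@16
      # Attach null0..null7 as namespaces 1..8; the varying order in the log
      # suggests a shuffled order, assumed here via shuf.
      for n in $(shuf -e {1..8}); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"   # @17
      done
      # Detach the same namespaces again, also in a (presumed) shuffled order.
      for n in $(shuf -e {1..8}); do
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"                    # @18
      done
  done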
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 409656 ']' 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 409656 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 409656 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 409656' 00:36:34.441 killing process with pid 409656 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 409656 00:36:34.441 03:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 409656 00:36:34.699 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:34.699 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:34.700 03:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.607 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:36.607 00:36:36.607 real 0m46.863s 00:36:36.607 user 3m16.834s 00:36:36.607 sys 0m21.493s 00:36:36.607 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.607 03:17:47 
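The teardown traced above (nvmftestfini, nvmfcleanup, killprocess, then nvmf_tcp_fini) condenses to roughly the sequence below; the individual commands are taken from the trace, but the loop structure and the namespace deletion inside _remove_spdk_ns are assumptions:

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # host modules may still be in use; retried up to 20 times
  done
  modprobe -v -r nvme-fabrics
  set -e
  kill 409656 && wait 409656             # stop the nvmf_tgt reactor launched by this test (pid 409656 here)
  # Strip only the SPDK-tagged iptables rules, then tear down the target-side netns.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1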
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:36.607 ************************************ 00:36:36.607 END TEST nvmf_ns_hotplug_stress 00:36:36.607 ************************************ 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:36.866 ************************************ 00:36:36.866 START TEST nvmf_delete_subsystem 00:36:36.866 ************************************ 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:36.866 * Looking for test storage... 00:36:36.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:36.866 03:17:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:36.866 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.867 --rc genhtml_branch_coverage=1 00:36:36.867 --rc genhtml_function_coverage=1 00:36:36.867 --rc genhtml_legend=1 00:36:36.867 --rc geninfo_all_blocks=1 00:36:36.867 --rc geninfo_unexecuted_blocks=1 00:36:36.867 00:36:36.867 ' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.867 --rc genhtml_branch_coverage=1 00:36:36.867 --rc genhtml_function_coverage=1 00:36:36.867 --rc genhtml_legend=1 00:36:36.867 --rc geninfo_all_blocks=1 00:36:36.867 --rc geninfo_unexecuted_blocks=1 00:36:36.867 00:36:36.867 ' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.867 --rc genhtml_branch_coverage=1 00:36:36.867 --rc genhtml_function_coverage=1 00:36:36.867 --rc genhtml_legend=1 00:36:36.867 --rc geninfo_all_blocks=1 00:36:36.867 --rc 
geninfo_unexecuted_blocks=1 00:36:36.867 00:36:36.867 ' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.867 --rc genhtml_branch_coverage=1 00:36:36.867 --rc genhtml_function_coverage=1 00:36:36.867 --rc genhtml_legend=1 00:36:36.867 --rc geninfo_all_blocks=1 00:36:36.867 --rc geninfo_unexecuted_blocks=1 00:36:36.867 00:36:36.867 ' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:36.867 03:17:47 
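The lcov probe walked through a few entries above (scripts/common.sh: lt calling cmp_versions with op '<') splits each version string on '.', '-' and ':' and compares the components numerically. A simplified sketch of that helper; the real one handles more operators and edge cases:

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v d1 d2 max
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          (( d1 > d2 )) && { [[ $op == ">" ]]; return; }
          (( d1 < d2 )) && { [[ $op == "<" ]]; return; }
      done
      [[ $op == "==" || $op == "<=" || $op == ">=" ]]
  }
  lt() { cmp_versions "$1" "<" "$2"; }

  # As in the trace: lcov 1.15 predates 2.x, so the legacy --rc options are selected.
  lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'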
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:36.867 03:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.401 03:17:49 
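At the top of this stretch, build_nvmf_app_args assembles the target command line from the array appends traced at nvmf/common.sh lines 29-34; condensed below, with the base binary path taken from the launch line later in the log (the suite runs with --interrupt-mode, so that flag is appended unconditionally here):

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)   # base command, path as launched later
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and log flags (common.sh@29)
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run (common.sh@31)
  NVMF_APP+=(--interrupt-mode)                  # interrupt-mode variant of the suite (common.sh@34)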
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.401 03:17:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.401 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:39.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:39.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.402 03:17:49 
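The device probe traced here (and finishing just below) first collects the known Intel E810/X722 and Mellanox PCI IDs, matches the two 0x8086:0x159b ports found on this host, and then reads the kernel net devices behind each PCI function out of sysfs. That last step, in isolation, amounts to roughly:

  # Illustrative sketch of the sysfs walk in the trace; the PCI addresses are the
  # two E810 ports reported above.
  net_devs=()
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done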
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:39.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:39.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:36:39.402 00:36:39.402 --- 10.0.0.2 ping statistics --- 00:36:39.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.402 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:39.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:36:39.402 00:36:39.402 --- 10.0.0.1 ping statistics --- 00:36:39.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.402 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=416711 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 416711 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 416711 ']' 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.402 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.403 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
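Editor's note: the nvmf_tcp_init sequence traced above boils down to moving one port of the dual-port ice NIC into a private network namespace, so that target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host, and then launching nvmf_tgt inside that namespace. A condensed, hedged sketch of those steps, using the interface and namespace names this particular run detected (they will differ on other machines) and a tree-relative path to the binary:

  ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port on the initiator side
  ping -c 1 10.0.0.2                                            # sanity-check connectivity in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!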
00:36:39.403 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.403 03:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.403 [2024-11-19 03:17:49.811175] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:39.403 [2024-11-19 03:17:49.812267] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:36:39.403 [2024-11-19 03:17:49.812321] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.403 [2024-11-19 03:17:49.885839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:39.403 [2024-11-19 03:17:49.930097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.403 [2024-11-19 03:17:49.930162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:39.403 [2024-11-19 03:17:49.930184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.403 [2024-11-19 03:17:49.930195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.403 [2024-11-19 03:17:49.930205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:39.403 [2024-11-19 03:17:49.934708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.403 [2024-11-19 03:17:49.934719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.663 [2024-11-19 03:17:50.020893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:39.663 [2024-11-19 03:17:50.020902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:39.663 [2024-11-19 03:17:50.021220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
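Editor's note: the waitforlisten step above simply blocks until the freshly started target answers on /var/tmp/spdk.sock; the return 0 on the next trace line is that wait completing. A minimal illustrative poll loop with the same effect (an assumption-based sketch, not a copy of the autotest_common.sh implementation), assuming scripts/rpc.py from the SPDK tree is available:

  wait_for_rpc() {                                              # illustrative helper, not from the test scripts
      local pid=$1 i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                # target died before it started listening
          if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
              return 0                                          # socket is up and answering RPCs
          fi
          sleep 0.1
      done
      return 1
  }
  wait_for_rpc "$nvmfpid"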
00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.663 [2024-11-19 03:17:50.075390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.663 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 [2024-11-19 03:17:50.091604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 NULL1 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.664 03:17:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 Delay0 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=416848 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:39.664 03:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:39.664 [2024-11-19 03:17:50.168661] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
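Editor's note: issued directly with scripts/rpc.py against the same /var/tmp/spdk.sock socket, the rpc_cmd calls traced above amount to: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev (the -r/-t/-w/-n arguments set average and p99 read/write latencies, in microseconds) attached as the only namespace, after which spdk_nvme_perf is pointed at the target. A condensed, hedged sketch:

  RPC="./scripts/rpc.py"                          # talks to the default /var/tmp/spdk.sock socket
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The roughly one-second average latencies that spdk_nvme_perf reports further down are that injected 1,000,000 µs delay showing through.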
00:36:41.572 03:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:41.572 03:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.572 03:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 [2024-11-19 03:17:52.372257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85810 is same with the state(6) to be set 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 
Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 [2024-11-19 03:17:52.373549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85e70 is same with the state(6) to be set 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, 
sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 starting I/O failed: -6 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 [2024-11-19 03:17:52.374021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2994000c40 is same with the state(6) to be set 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.832 Write completed with error (sct=0, sc=8) 00:36:41.832 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Write completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Write completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 
00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Write completed with error (sct=0, sc=8) 00:36:41.833 Write completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Write completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Write completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Write completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:41.833 Read completed with error (sct=0, sc=8) 00:36:42.768 [2024-11-19 03:17:53.348402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc935b0 is same with the state(6) to be set 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 [2024-11-19 03:17:53.375376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f299400d7e0 is same with the state(6) to be set 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 [2024-11-19 03:17:53.375536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f299400d020 is same with the state(6) to be set 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 
00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 [2024-11-19 03:17:53.376831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc853f0 is same with the state(6) to be set 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Read completed with error (sct=0, sc=8) 00:36:42.768 Write completed with error (sct=0, sc=8) 00:36:42.768 [2024-11-19 03:17:53.377265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85b40 is same with the state(6) to be set 00:36:42.768 Initializing NVMe Controllers 00:36:42.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:42.768 Controller IO queue size 128, less than required. 00:36:42.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:42.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:42.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:42.769 Initialization complete. Launching workers. 
00:36:42.769 ======================================================== 00:36:42.769 Latency(us) 00:36:42.769 Device Information : IOPS MiB/s Average min max 00:36:42.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.36 0.08 919354.95 902.60 1012383.85 00:36:42.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.87 0.08 921912.14 388.47 1013484.63 00:36:42.769 ======================================================== 00:36:42.769 Total : 319.24 0.16 920627.58 388.47 1013484.63 00:36:42.769 00:36:42.769 [2024-11-19 03:17:53.377764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc935b0 (9): Bad file descriptor 00:36:42.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:36:42.769 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.769 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:36:42.769 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416848 00:36:42.769 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416848 00:36:43.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (416848) - No such process 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 416848 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 416848 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 416848 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.337 [2024-11-19 03:17:53.899529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=417261 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:43.337 03:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:43.596 [2024-11-19 03:17:53.958412] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
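Editor's note: both halves of the test monitor the initiator with the same idiom, kill -0 on the perf PID every half second with an iteration cap (30 in the first half at delete_subsystem.sh line 38, 20 in the second half at line 60, as echoed in the trace). In the first half the loop confirms perf exits after the subsystem is deleted out from under it; in the second half, re-created above, it simply lets the 3-second run finish. A condensed sketch of the second-half wait, with names and bounds taken from the trace:

  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do       # poll until the initiator process goes away
      (( delay++ > 20 )) && exit 1                # fail the test if it lingers past ~10 s
      sleep 0.5
  done
  wait "$perf_pid" 2>/dev/null                    # collect its exit status once it is gone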
00:36:43.856 03:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:43.856 03:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:43.856 03:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:44.446 03:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:44.446 03:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:44.446 03:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:45.108 03:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:45.108 03:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:45.108 03:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:45.411 03:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:45.411 03:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:45.411 03:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:46.001 03:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:46.001 03:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:46.001 03:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:46.571 03:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:46.571 03:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:46.571 03:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:46.831 Initializing NVMe Controllers 00:36:46.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:46.831 Controller IO queue size 128, less than required. 00:36:46.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:46.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:46.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:46.831 Initialization complete. Launching workers. 
00:36:46.831 ======================================================== 00:36:46.831 Latency(us) 00:36:46.831 Device Information : IOPS MiB/s Average min max 00:36:46.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003493.48 1000161.89 1011492.64 00:36:46.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004672.83 1000179.38 1042851.59 00:36:46.831 ======================================================== 00:36:46.831 Total : 256.00 0.12 1004083.15 1000161.89 1042851.59 00:36:46.831 00:36:46.831 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:46.831 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 417261 00:36:46.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (417261) - No such process 00:36:46.831 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 417261 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:46.832 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:46.832 rmmod nvme_tcp 00:36:47.090 rmmod nvme_fabrics 00:36:47.090 rmmod nvme_keyring 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 416711 ']' 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 416711 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 416711 ']' 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 416711 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 416711 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 416711' 00:36:47.090 killing process with pid 416711 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 416711 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 416711 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:47.090 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:47.091 03:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:49.628 00:36:49.628 real 0m12.484s 00:36:49.628 user 0m24.724s 00:36:49.628 sys 0m3.896s 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:49.628 ************************************ 00:36:49.628 END TEST nvmf_delete_subsystem 00:36:49.628 ************************************ 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:49.628 ************************************ 00:36:49.628 START TEST nvmf_host_management 00:36:49.628 ************************************ 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:49.628 * Looking for test storage... 00:36:49.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:49.628 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:49.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.628 --rc genhtml_branch_coverage=1 00:36:49.628 --rc genhtml_function_coverage=1 00:36:49.628 --rc genhtml_legend=1 00:36:49.628 --rc geninfo_all_blocks=1 00:36:49.628 --rc geninfo_unexecuted_blocks=1 00:36:49.628 00:36:49.629 ' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:49.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.629 --rc genhtml_branch_coverage=1 00:36:49.629 --rc genhtml_function_coverage=1 00:36:49.629 --rc genhtml_legend=1 00:36:49.629 --rc geninfo_all_blocks=1 00:36:49.629 --rc geninfo_unexecuted_blocks=1 00:36:49.629 00:36:49.629 ' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:49.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.629 --rc genhtml_branch_coverage=1 00:36:49.629 --rc genhtml_function_coverage=1 00:36:49.629 --rc genhtml_legend=1 00:36:49.629 --rc geninfo_all_blocks=1 00:36:49.629 --rc geninfo_unexecuted_blocks=1 00:36:49.629 00:36:49.629 ' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:49.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.629 --rc genhtml_branch_coverage=1 00:36:49.629 --rc genhtml_function_coverage=1 00:36:49.629 --rc genhtml_legend=1 
00:36:49.629 --rc geninfo_all_blocks=1 00:36:49.629 --rc geninfo_unexecuted_blocks=1 00:36:49.629 00:36:49.629 ' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:49.629 03:17:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:36:49.629 03:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:51.536 03:18:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:51.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:51.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:51.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:51.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:51.536 03:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:51.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:51.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:36:51.536 00:36:51.536 --- 10.0.0.2 ping statistics --- 00:36:51.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.536 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:51.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:51.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:36:51.536 00:36:51.536 --- 10.0.0.1 ping statistics --- 00:36:51.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.536 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=419720 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 419720 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 419720 ']' 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.536 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.537 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.537 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.537 [2024-11-19 03:18:02.117453] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:51.537 [2024-11-19 03:18:02.118531] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:36:51.537 [2024-11-19 03:18:02.118598] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.795 [2024-11-19 03:18:02.191559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.795 [2024-11-19 03:18:02.238504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.795 [2024-11-19 03:18:02.238558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:51.795 [2024-11-19 03:18:02.238579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.795 [2024-11-19 03:18:02.238597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.795 [2024-11-19 03:18:02.238612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:51.795 [2024-11-19 03:18:02.240179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.795 [2024-11-19 03:18:02.240242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:51.795 [2024-11-19 03:18:02.240307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:51.795 [2024-11-19 03:18:02.240310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.795 [2024-11-19 03:18:02.322522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:51.795 [2024-11-19 03:18:02.322740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:51.795 [2024-11-19 03:18:02.323028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:51.795 [2024-11-19 03:18:02.323583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:51.795 [2024-11-19 03:18:02.323857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.795 [2024-11-19 03:18:02.377009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:51.795 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.053 Malloc0 00:36:52.053 [2024-11-19 03:18:02.461345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=419761 00:36:52.053 03:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 419761 /var/tmp/bdevperf.sock 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 419761 ']' 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:52.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.053 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:52.053 { 00:36:52.053 "params": { 00:36:52.054 "name": "Nvme$subsystem", 00:36:52.054 "trtype": "$TEST_TRANSPORT", 00:36:52.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:52.054 "adrfam": "ipv4", 00:36:52.054 "trsvcid": "$NVMF_PORT", 00:36:52.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:52.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:52.054 "hdgst": ${hdgst:-false}, 00:36:52.054 "ddgst": ${ddgst:-false} 00:36:52.054 }, 00:36:52.054 "method": "bdev_nvme_attach_controller" 00:36:52.054 } 00:36:52.054 EOF 00:36:52.054 )") 00:36:52.054 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:52.054 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:36:52.054 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:52.054 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:52.054 "params": { 00:36:52.054 "name": "Nvme0", 00:36:52.054 "trtype": "tcp", 00:36:52.054 "traddr": "10.0.0.2", 00:36:52.054 "adrfam": "ipv4", 00:36:52.054 "trsvcid": "4420", 00:36:52.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.054 "hdgst": false, 00:36:52.054 "ddgst": false 00:36:52.054 }, 00:36:52.054 "method": "bdev_nvme_attach_controller" 00:36:52.054 }' 00:36:52.054 [2024-11-19 03:18:02.538382] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:36:52.054 [2024-11-19 03:18:02.538460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419761 ] 00:36:52.054 [2024-11-19 03:18:02.611341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.054 [2024-11-19 03:18:02.659318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.630 Running I/O for 10 seconds... 00:36:52.630 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.630 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:52.630 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:52.630 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.630 03:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:36:52.630 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=549 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 549 -ge 100 ']' 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.895 [2024-11-19 03:18:03.352123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.895 [2024-11-19 03:18:03.352200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.895 [2024-11-19 03:18:03.352220] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.895 [2024-11-19 03:18:03.352234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.895 [2024-11-19 03:18:03.352249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.895 [2024-11-19 03:18:03.352262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.895 [2024-11-19 03:18:03.352276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.895 [2024-11-19 03:18:03.352300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.895 [2024-11-19 03:18:03.352313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18aed70 is same with the state(6) to be set 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.895 03:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:52.895 [2024-11-19 03:18:03.362274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18aed70 (9): Bad file descriptor 00:36:52.895 [2024-11-19 03:18:03.362362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.895 [2024-11-19 03:18:03.362384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.895 [2024-11-19 03:18:03.362410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.362959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.362990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.363004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.363019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.363034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.363066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.363085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.363101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.363116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.363131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.896 [2024-11-19 03:18:03.363145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:36:52.896 [2024-11-19 03:18:03.363160 .. 03:18:03.364377] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated entries condensed: WRITE sqid:1 cid:24-63 nsid:1, lba 84992-89984 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the submission queue was deleted during controller reset] 00:36:52.897 [2024-11-19
03:18:03.365598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:52.897 task offset: 81920 on job bdev=Nvme0n1 fails 00:36:52.897 00:36:52.897 Latency(us) 00:36:52.897 [2024-11-19T02:18:03.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.897 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:52.897 Job: Nvme0n1 ended in about 0.41 seconds with error 00:36:52.897 Verification LBA range: start 0x0 length 0x400 00:36:52.897 Nvme0n1 : 0.41 1565.71 97.86 156.57 0.00 36102.93 2560.76 35535.08 00:36:52.897 [2024-11-19T02:18:03.512Z] =================================================================================================================== 00:36:52.897 [2024-11-19T02:18:03.512Z] Total : 1565.71 97.86 156.57 0.00 36102.93 2560.76 35535.08 00:36:52.897 [2024-11-19 03:18:03.367535] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:52.897 [2024-11-19 03:18:03.420067] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 419761 00:36:53.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (419761) - No such process 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.834 { 00:36:53.834 "params": { 00:36:53.834 "name": "Nvme$subsystem", 00:36:53.834 "trtype": "$TEST_TRANSPORT", 00:36:53.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.834 "adrfam": "ipv4", 00:36:53.834 "trsvcid": "$NVMF_PORT", 00:36:53.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.834 "hdgst": ${hdgst:-false}, 00:36:53.834 "ddgst": ${ddgst:-false} 00:36:53.834 }, 00:36:53.834 "method": "bdev_nvme_attach_controller" 00:36:53.834 } 00:36:53.834 EOF 00:36:53.834 )") 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
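At this point the harness is assembling a one-subsystem bdev config and handing it straight to bdevperf without a temporary file. A minimal standalone sketch of that pattern (assuming a local SPDK checkout; gen_nvmf_target_json is the harness helper traced above, and <( ) process substitution is what produces the --json /dev/fd/NN argument seen in this run):

  # Hypothetical standalone form of the invocation traced in this log:
  # the generated JSON is exposed as a file descriptor and read via --json.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1
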
00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:53.834 03:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:53.834 "params": { 00:36:53.834 "name": "Nvme0", 00:36:53.834 "trtype": "tcp", 00:36:53.834 "traddr": "10.0.0.2", 00:36:53.834 "adrfam": "ipv4", 00:36:53.834 "trsvcid": "4420", 00:36:53.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.834 "hdgst": false, 00:36:53.834 "ddgst": false 00:36:53.834 }, 00:36:53.834 "method": "bdev_nvme_attach_controller" 00:36:53.834 }' 00:36:53.834 [2024-11-19 03:18:04.409747] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:36:53.834 [2024-11-19 03:18:04.409826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420038 ] 00:36:54.092 [2024-11-19 03:18:04.480203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.092 [2024-11-19 03:18:04.525851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.350 Running I/O for 1 seconds... 00:36:55.287 1664.00 IOPS, 104.00 MiB/s 00:36:55.287 Latency(us) 00:36:55.287 [2024-11-19T02:18:05.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.287 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:55.287 Verification LBA range: start 0x0 length 0x400 00:36:55.287 Nvme0n1 : 1.02 1696.11 106.01 0.00 0.00 37124.33 5534.15 32428.18 00:36:55.287 [2024-11-19T02:18:05.902Z] =================================================================================================================== 00:36:55.287 [2024-11-19T02:18:05.902Z] Total : 1696.11 106.01 0.00 0.00 37124.33 5534.15 32428.18 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:55.548 03:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:55.548 rmmod nvme_tcp 00:36:55.548 rmmod nvme_fabrics 00:36:55.548 rmmod nvme_keyring 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 419720 ']' 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 419720 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 419720 ']' 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 419720 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419720 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419720' 00:36:55.548 killing process with pid 419720 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 419720 00:36:55.548 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 419720 00:36:55.808 [2024-11-19 03:18:06.256393] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 
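The iptr helper traced above presumably reduces to a single pipeline that re-imports the firewall ruleset minus the rules the test added (those tagged with an SPDK_NVMF comment), leaving everything else untouched; a sketch of that teardown pattern:

  # Drop only SPDK-tagged rules on teardown, keep the rest of the ruleset.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
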
00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:55.808 03:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.712 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:36:57.971 00:36:57.971 real 0m8.542s 00:36:57.971 user 0m17.033s 00:36:57.971 sys 0m3.658s 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:57.971 ************************************ 00:36:57.971 END TEST nvmf_host_management 00:36:57.971 ************************************ 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:57.971 ************************************ 00:36:57.971 START TEST nvmf_lvol 00:36:57.971 ************************************ 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:57.971 * Looking for test storage... 
00:36:57.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:57.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.971 --rc genhtml_branch_coverage=1 00:36:57.971 --rc genhtml_function_coverage=1 00:36:57.971 --rc genhtml_legend=1 00:36:57.971 --rc geninfo_all_blocks=1 00:36:57.971 --rc geninfo_unexecuted_blocks=1 00:36:57.971 00:36:57.971 ' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:57.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.971 --rc genhtml_branch_coverage=1 00:36:57.971 --rc genhtml_function_coverage=1 00:36:57.971 --rc genhtml_legend=1 00:36:57.971 --rc geninfo_all_blocks=1 00:36:57.971 --rc geninfo_unexecuted_blocks=1 00:36:57.971 00:36:57.971 ' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:57.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.971 --rc genhtml_branch_coverage=1 00:36:57.971 --rc genhtml_function_coverage=1 00:36:57.971 --rc genhtml_legend=1 00:36:57.971 --rc geninfo_all_blocks=1 00:36:57.971 --rc geninfo_unexecuted_blocks=1 00:36:57.971 00:36:57.971 ' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:57.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.971 --rc genhtml_branch_coverage=1 00:36:57.971 --rc genhtml_function_coverage=1 00:36:57.971 --rc genhtml_legend=1 00:36:57.971 --rc geninfo_all_blocks=1 00:36:57.971 --rc geninfo_unexecuted_blocks=1 00:36:57.971 00:36:57.971 ' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:57.971 03:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:36:57.971 03:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:00.505 03:18:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:00.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:00.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:00.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:00.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:00.505 
03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:00.505 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:00.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:00.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:37:00.506 00:37:00.506 --- 10.0.0.2 ping statistics --- 00:37:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.506 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:00.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:00.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:37:00.506 00:37:00.506 --- 10.0.0.1 ping statistics --- 00:37:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.506 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=422734 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 422734 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 422734 ']' 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:00.506 03:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:00.506 [2024-11-19 03:18:10.833643] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
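For readers reproducing this setup by hand, the interface plumbing traced above amounts to the following recap (interface names, addresses, and the listener port are the ones used in this run; no configuration beyond what the log already shows):

  # Target port isolated in its own network namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in, then verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
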
00:37:00.506 [2024-11-19 03:18:10.834695] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:00.506 [2024-11-19 03:18:10.834749] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.506 [2024-11-19 03:18:10.905147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:00.506 [2024-11-19 03:18:10.949919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.506 [2024-11-19 03:18:10.949986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.506 [2024-11-19 03:18:10.949999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.506 [2024-11-19 03:18:10.950010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.506 [2024-11-19 03:18:10.950019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:00.506 [2024-11-19 03:18:10.951501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.506 [2024-11-19 03:18:10.951639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:00.506 [2024-11-19 03:18:10.951642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.506 [2024-11-19 03:18:11.033832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:00.506 [2024-11-19 03:18:11.033995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:00.506 [2024-11-19 03:18:11.034005] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:00.506 [2024-11-19 03:18:11.034253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
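The RPC sequence that follows builds the logical-volume stack under test; as a consolidated reference (UUID arguments abbreviated to placeholders, everything else as issued in this run), it amounts to:

  # rpc_py is scripts/rpc.py from the SPDK tree, as set by nvmf_lvol.sh above.
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512                               # Malloc0
  $rpc_py bdev_malloc_create 64 512                               # Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc_py bdev_lvol_create_lvstore raid0 lvs                      # prints the lvstore UUID
  $rpc_py bdev_lvol_create -u <lvstore-uuid> lvol 20              # LVOL_BDEV_INIT_SIZE
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf drives random writes against the exported namespace:
  $rpc_py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  $rpc_py bdev_lvol_resize <lvol-uuid> 30                         # LVOL_BDEV_FINAL_SIZE
  $rpc_py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  $rpc_py bdev_lvol_inflate <clone-uuid>
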
00:37:00.506 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.506 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:00.506 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:00.506 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:00.506 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:00.506 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.506 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:00.764 [2024-11-19 03:18:11.332360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.764 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:01.334 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:01.334 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:01.594 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:01.594 03:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:01.854 03:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:02.114 03:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=94138e50-a968-49d0-8032-938651ae978b 00:37:02.114 03:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94138e50-a968-49d0-8032-938651ae978b lvol 20 00:37:02.373 03:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=80ab4467-e575-46e7-a87c-0f29603272c7 00:37:02.373 03:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:02.631 03:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80ab4467-e575-46e7-a87c-0f29603272c7 00:37:02.888 03:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:03.146 [2024-11-19 03:18:13.592535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:03.146 03:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:03.404 03:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=423039 00:37:03.404 03:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:03.404 03:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:04.343 03:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 80ab4467-e575-46e7-a87c-0f29603272c7 MY_SNAPSHOT 00:37:04.601 03:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d923fdcb-9c00-4332-8b93-daf130802030 00:37:04.601 03:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 80ab4467-e575-46e7-a87c-0f29603272c7 30 00:37:05.167 03:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d923fdcb-9c00-4332-8b93-daf130802030 MY_CLONE 00:37:05.167 03:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e42daac8-2dd2-4401-b7a3-201ccc6e3a2f 00:37:05.167 03:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e42daac8-2dd2-4401-b7a3-201ccc6e3a2f 00:37:05.734 03:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 423039 00:37:13.856 Initializing NVMe Controllers 00:37:13.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:13.856 Controller IO queue size 128, less than required. 00:37:13.856 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:13.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:13.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:13.856 Initialization complete. Launching workers. 
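Condensed for readability, the nvmf_lvol setup and data-path steps traced above amount to the rpc.py sequence below. This is a sketch, not the test script itself: the full /var/jenkins/... paths to rpc.py and spdk_nvme_perf are shortened, and the shell variables ($rpc, $lvs, $lvol, $snap, $clone) are illustrative stand-ins for the UUIDs reported in this run.

  # Sketch of the lvol-over-NVMe/TCP flow exercised by target/nvmf_lvol.sh (paths shortened)
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, flags exactly as traced
  $rpc bdev_malloc_create 64 512                                     # Malloc0
  $rpc bdev_malloc_create 64 512                                     # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'     # raid0 across both malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                     # lvstore UUID (94138e50-... in this run)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                    # 20 MiB lvol (80ab4467-... in this run)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &  # 10 s randwrite load in the background
  perf_pid=$!
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)                # snapshot while I/O is in flight
  $rpc bdev_lvol_resize "$lvol" 30                                   # grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                                    # decouple the clone from its snapshot
  wait "$perf_pid"

The latency table that follows is the output of that background spdk_nvme_perf run.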
00:37:13.856 ======================================================== 00:37:13.856 Latency(us) 00:37:13.856 Device Information : IOPS MiB/s Average min max 00:37:13.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10577.20 41.32 12110.78 1898.67 86643.05 00:37:13.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10506.90 41.04 12187.74 3922.54 82196.44 00:37:13.856 ======================================================== 00:37:13.856 Total : 21084.10 82.36 12149.13 1898.67 86643.05 00:37:13.856 00:37:13.856 03:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:14.114 03:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80ab4467-e575-46e7-a87c-0f29603272c7 00:37:14.373 03:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94138e50-a968-49d0-8032-938651ae978b 00:37:14.632 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:14.633 rmmod nvme_tcp 00:37:14.633 rmmod nvme_fabrics 00:37:14.633 rmmod nvme_keyring 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 422734 ']' 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 422734 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 422734 ']' 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 422734 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:14.633 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.892 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422734 00:37:14.892 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:14.892 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:14.892 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422734' 00:37:14.892 killing process with pid 422734 00:37:14.892 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 422734 00:37:14.892 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 422734 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:15.152 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:17.062 00:37:17.062 real 0m19.192s 00:37:17.062 user 0m56.640s 00:37:17.062 sys 0m7.739s 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:17.062 ************************************ 00:37:17.062 END TEST nvmf_lvol 00:37:17.062 ************************************ 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:17.062 ************************************ 00:37:17.062 START TEST nvmf_lvs_grow 00:37:17.062 
************************************ 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:17.062 * Looking for test storage... 00:37:17.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:17.062 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.321 --rc genhtml_branch_coverage=1 00:37:17.321 --rc genhtml_function_coverage=1 00:37:17.321 --rc genhtml_legend=1 00:37:17.321 --rc geninfo_all_blocks=1 00:37:17.321 --rc geninfo_unexecuted_blocks=1 00:37:17.321 00:37:17.321 ' 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.321 --rc genhtml_branch_coverage=1 00:37:17.321 --rc genhtml_function_coverage=1 00:37:17.321 --rc genhtml_legend=1 00:37:17.321 --rc geninfo_all_blocks=1 00:37:17.321 --rc geninfo_unexecuted_blocks=1 00:37:17.321 00:37:17.321 ' 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.321 --rc genhtml_branch_coverage=1 00:37:17.321 --rc genhtml_function_coverage=1 00:37:17.321 --rc genhtml_legend=1 00:37:17.321 --rc geninfo_all_blocks=1 00:37:17.321 --rc geninfo_unexecuted_blocks=1 00:37:17.321 00:37:17.321 ' 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.321 --rc genhtml_branch_coverage=1 00:37:17.321 --rc genhtml_function_coverage=1 00:37:17.321 --rc genhtml_legend=1 00:37:17.321 --rc geninfo_all_blocks=1 00:37:17.321 --rc geninfo_unexecuted_blocks=1 00:37:17.321 00:37:17.321 ' 00:37:17.321 03:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.321 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:17.322 03:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:19.859 03:18:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
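The arrays populated above (e810, x722, mlx) are simply lists of supported Intel and Mellanox vendor:device IDs; the trace that follows walks every matching PCI function and resolves the kernel network interface bound to it through sysfs. A minimal sketch of that lookup, condensed from the nvmf/common.sh lines traced below (the link-state and RDMA-specific checks are omitted):

  # For each supported PCI function, find its netdev name under sysfs
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:0a:00.0/net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

In this run the two functions of the E810 NIC (0000:0a:00.0 and 0000:0a:00.1, device ID 0x159b, driver ice) resolve to cvl_0_0 and cvl_0_1.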
00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:19.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:19.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:19.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:19.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:19.859 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:19.860 03:18:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:19.860 03:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:19.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:19.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:37:19.860 00:37:19.860 --- 10.0.0.2 ping statistics --- 00:37:19.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.860 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:19.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:19.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:37:19.860 00:37:19.860 --- 10.0.0.1 ping statistics --- 00:37:19.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.860 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=426402 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 426402 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 426402 ']' 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:19.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:19.860 [2024-11-19 03:18:30.144577] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
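To summarize the network bring-up just traced: the first discovered port (cvl_0_0) is moved into a dedicated namespace and becomes the target side at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; TCP port 4420 is opened in iptables, both directions are verified with a single ping, and nvmf_tgt is then started inside the namespace on one core in interrupt mode. A condensed sketch of those steps (binary paths shortened; interface and namespace names as reported in this run):

  # Target port in its own namespace, initiator port in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns
  # Single-core nvmf target in interrupt mode, run inside the target namespace
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

The SPDK startup notices that follow come from that nvmf_tgt process coming up in interrupt mode.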
00:37:19.860 [2024-11-19 03:18:30.145760] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:19.860 [2024-11-19 03:18:30.145820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:19.860 [2024-11-19 03:18:30.220708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.860 [2024-11-19 03:18:30.269738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:19.860 [2024-11-19 03:18:30.269796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:19.860 [2024-11-19 03:18:30.269827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:19.860 [2024-11-19 03:18:30.269839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:19.860 [2024-11-19 03:18:30.269850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:19.860 [2024-11-19 03:18:30.270483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.860 [2024-11-19 03:18:30.366042] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:19.860 [2024-11-19 03:18:30.366392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:19.860 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:20.119 [2024-11-19 03:18:30.679127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:20.119 ************************************ 00:37:20.119 START TEST lvs_grow_clean 00:37:20.119 ************************************ 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:20.119 03:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:20.688 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:20.688 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:20.688 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:20.688 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:20.688 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:21.256 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:21.256 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:21.257 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b lvol 150 00:37:21.257 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0de472b6-68d1-4758-8516-d2782dd5dbde 00:37:21.257 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:21.257 03:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:21.515 [2024-11-19 03:18:32.106999] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:21.515 [2024-11-19 03:18:32.107114] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:21.515 true 00:37:21.515 03:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:21.516 03:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:22.085 03:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:22.085 03:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:22.085 03:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0de472b6-68d1-4758-8516-d2782dd5dbde 00:37:22.345 03:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.603 [2024-11-19 03:18:33.191387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.603 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=426841 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 426841 /var/tmp/bdevperf.sock 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 426841 ']' 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:23.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.171 [2024-11-19 03:18:33.525338] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:23.171 [2024-11-19 03:18:33.525435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426841 ] 00:37:23.171 [2024-11-19 03:18:33.591701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.171 [2024-11-19 03:18:33.638356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:23.171 03:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:23.741 Nvme0n1 00:37:23.741 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:24.001 [ 00:37:24.001 { 00:37:24.001 "name": "Nvme0n1", 00:37:24.001 "aliases": [ 00:37:24.001 "0de472b6-68d1-4758-8516-d2782dd5dbde" 00:37:24.001 ], 00:37:24.001 "product_name": "NVMe disk", 00:37:24.001 "block_size": 4096, 00:37:24.001 "num_blocks": 38912, 00:37:24.001 "uuid": "0de472b6-68d1-4758-8516-d2782dd5dbde", 00:37:24.001 "numa_id": 0, 00:37:24.001 "assigned_rate_limits": { 00:37:24.001 "rw_ios_per_sec": 0, 00:37:24.001 "rw_mbytes_per_sec": 0, 00:37:24.001 "r_mbytes_per_sec": 0, 00:37:24.001 "w_mbytes_per_sec": 0 00:37:24.001 }, 00:37:24.001 "claimed": false, 00:37:24.001 "zoned": false, 00:37:24.001 "supported_io_types": { 00:37:24.001 "read": true, 00:37:24.001 "write": true, 00:37:24.001 "unmap": true, 00:37:24.001 "flush": true, 00:37:24.001 "reset": true, 00:37:24.001 "nvme_admin": true, 00:37:24.001 "nvme_io": true, 00:37:24.001 "nvme_io_md": false, 00:37:24.001 "write_zeroes": true, 00:37:24.001 "zcopy": false, 00:37:24.001 "get_zone_info": false, 00:37:24.001 "zone_management": false, 00:37:24.001 "zone_append": false, 00:37:24.001 "compare": true, 00:37:24.001 "compare_and_write": true, 00:37:24.001 "abort": true, 00:37:24.001 "seek_hole": false, 00:37:24.001 "seek_data": false, 00:37:24.001 "copy": true, 
00:37:24.001 "nvme_iov_md": false 00:37:24.001 }, 00:37:24.001 "memory_domains": [ 00:37:24.001 { 00:37:24.001 "dma_device_id": "system", 00:37:24.001 "dma_device_type": 1 00:37:24.001 } 00:37:24.001 ], 00:37:24.001 "driver_specific": { 00:37:24.001 "nvme": [ 00:37:24.001 { 00:37:24.001 "trid": { 00:37:24.001 "trtype": "TCP", 00:37:24.001 "adrfam": "IPv4", 00:37:24.001 "traddr": "10.0.0.2", 00:37:24.001 "trsvcid": "4420", 00:37:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:24.001 }, 00:37:24.001 "ctrlr_data": { 00:37:24.001 "cntlid": 1, 00:37:24.001 "vendor_id": "0x8086", 00:37:24.001 "model_number": "SPDK bdev Controller", 00:37:24.001 "serial_number": "SPDK0", 00:37:24.001 "firmware_revision": "25.01", 00:37:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.001 "oacs": { 00:37:24.001 "security": 0, 00:37:24.001 "format": 0, 00:37:24.001 "firmware": 0, 00:37:24.001 "ns_manage": 0 00:37:24.001 }, 00:37:24.001 "multi_ctrlr": true, 00:37:24.001 "ana_reporting": false 00:37:24.001 }, 00:37:24.001 "vs": { 00:37:24.001 "nvme_version": "1.3" 00:37:24.001 }, 00:37:24.001 "ns_data": { 00:37:24.001 "id": 1, 00:37:24.001 "can_share": true 00:37:24.001 } 00:37:24.001 } 00:37:24.001 ], 00:37:24.001 "mp_policy": "active_passive" 00:37:24.001 } 00:37:24.001 } 00:37:24.001 ] 00:37:24.001 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=426923 00:37:24.001 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:24.001 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:24.261 Running I/O for 10 seconds... 
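The lvs_grow_clean case now running builds its logical volume store on a file-backed AIO bdev precisely so the backing device can be grown while I/O is in flight. A 200 MiB file yields an lvstore with 49 data clusters at a 4 MiB cluster size; a 150 MiB lvol from it is exported over NVMe/TCP and attached by bdevperf as Nvme0n1 (the bdev JSON above); the backing file is truncated to 400 MiB and rescanned, and bdev_lvol_grow_lvstore is issued mid-run, after which the cluster count read back below grows from 49 to 99. A condensed sketch of those steps (paths shortened; $rpc and $lvs are illustrative stand-ins for the full rpc.py path and the lvstore UUID reported in this run):

  # File-backed AIO bdev -> lvstore -> lvol, exported over NVMe/TCP
  truncate -s 200M aio_bdev_file
  $rpc bdev_aio_create aio_bdev_file aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M aio_bdev_file                     # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                      # ...and let the AIO bdev pick up the new size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf (started with -z) attaches the namespace over its own RPC socket as Nvme0n1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # While the 10 s randwrite job runs, extend the lvstore onto the grown device
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

The per-second bdevperf results and the grow_lvstore call issued during the run follow in the trace below.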
00:37:25.198 Latency(us) 00:37:25.198 [2024-11-19T02:18:35.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:25.198 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:25.198 [2024-11-19T02:18:35.813Z] =================================================================================================================== 00:37:25.198 [2024-11-19T02:18:35.813Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:25.198 00:37:26.133 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:26.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:26.133 Nvme0n1 : 2.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:26.133 [2024-11-19T02:18:36.748Z] =================================================================================================================== 00:37:26.133 [2024-11-19T02:18:36.748Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:26.133 00:37:26.392 true 00:37:26.392 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:26.392 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:26.650 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:26.650 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:26.650 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 426923 00:37:27.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.215 Nvme0n1 : 3.00 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:37:27.215 [2024-11-19T02:18:37.830Z] =================================================================================================================== 00:37:27.215 [2024-11-19T02:18:37.830Z] Total : 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:37:27.215 00:37:28.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:28.153 Nvme0n1 : 4.00 15160.75 59.22 0.00 0.00 0.00 0.00 0.00 00:37:28.153 [2024-11-19T02:18:38.768Z] =================================================================================================================== 00:37:28.153 [2024-11-19T02:18:38.768Z] Total : 15160.75 59.22 0.00 0.00 0.00 0.00 0.00 00:37:28.153 00:37:29.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:29.098 Nvme0n1 : 5.00 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:37:29.098 [2024-11-19T02:18:39.713Z] =================================================================================================================== 00:37:29.098 [2024-11-19T02:18:39.713Z] Total : 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:37:29.098 00:37:30.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:30.479 Nvme0n1 : 6.00 15118.67 59.06 0.00 0.00 0.00 0.00 0.00 00:37:30.479 [2024-11-19T02:18:41.094Z] 
=================================================================================================================== 00:37:30.479 [2024-11-19T02:18:41.094Z] Total : 15118.67 59.06 0.00 0.00 0.00 0.00 0.00 00:37:30.479 00:37:31.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:31.417 Nvme0n1 : 7.00 15172.29 59.27 0.00 0.00 0.00 0.00 0.00 00:37:31.417 [2024-11-19T02:18:42.032Z] =================================================================================================================== 00:37:31.417 [2024-11-19T02:18:42.032Z] Total : 15172.29 59.27 0.00 0.00 0.00 0.00 0.00 00:37:31.417 00:37:32.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:32.353 Nvme0n1 : 8.00 15228.50 59.49 0.00 0.00 0.00 0.00 0.00 00:37:32.353 [2024-11-19T02:18:42.968Z] =================================================================================================================== 00:37:32.353 [2024-11-19T02:18:42.968Z] Total : 15228.50 59.49 0.00 0.00 0.00 0.00 0.00 00:37:32.353 00:37:33.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.291 Nvme0n1 : 9.00 15272.11 59.66 0.00 0.00 0.00 0.00 0.00 00:37:33.291 [2024-11-19T02:18:43.906Z] =================================================================================================================== 00:37:33.291 [2024-11-19T02:18:43.906Z] Total : 15272.11 59.66 0.00 0.00 0.00 0.00 0.00 00:37:33.291 00:37:34.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.228 Nvme0n1 : 10.00 15313.40 59.82 0.00 0.00 0.00 0.00 0.00 00:37:34.228 [2024-11-19T02:18:44.843Z] =================================================================================================================== 00:37:34.228 [2024-11-19T02:18:44.843Z] Total : 15313.40 59.82 0.00 0.00 0.00 0.00 0.00 00:37:34.228 00:37:34.228 00:37:34.228 Latency(us) 00:37:34.228 [2024-11-19T02:18:44.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.228 Nvme0n1 : 10.00 15318.77 59.84 0.00 0.00 8350.41 3495.25 19515.16 00:37:34.228 [2024-11-19T02:18:44.843Z] =================================================================================================================== 00:37:34.228 [2024-11-19T02:18:44.843Z] Total : 15318.77 59.84 0.00 0.00 8350.41 3495.25 19515.16 00:37:34.228 { 00:37:34.228 "results": [ 00:37:34.228 { 00:37:34.228 "job": "Nvme0n1", 00:37:34.228 "core_mask": "0x2", 00:37:34.228 "workload": "randwrite", 00:37:34.228 "status": "finished", 00:37:34.228 "queue_depth": 128, 00:37:34.228 "io_size": 4096, 00:37:34.228 "runtime": 10.004847, 00:37:34.228 "iops": 15318.774989762462, 00:37:34.228 "mibps": 59.838964803759616, 00:37:34.228 "io_failed": 0, 00:37:34.228 "io_timeout": 0, 00:37:34.228 "avg_latency_us": 8350.4066908035, 00:37:34.228 "min_latency_us": 3495.2533333333336, 00:37:34.228 "max_latency_us": 19515.164444444443 00:37:34.228 } 00:37:34.228 ], 00:37:34.228 "core_count": 1 00:37:34.228 } 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 426841 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 426841 ']' 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 426841 
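The interesting part of the clean variant happens in the middle of that 10-second job: while bdevperf is still writing, the lvstore backing the exported lvol is grown with bdev_lvol_grow_lvstore and the new size is read back with bdev_lvol_get_lvstores. The backing aio file starts at 200 MB and is truncated to 400 MB before the grow (those sizes are the aio_init/final parameters declared for the dirty variant further down; the clean variant was presumably set up the same way earlier in the log), so with 4 MiB clusters the store grows from 49 to 99 data clusters. A sketch of the check, using the lvstore UUID from this run:

  lvs_uuid=0c694c4e-2e56-4d83-b66a-b9b835d0b97b

  # claim the space added to the backing aio bdev
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"

  # the grow is immediately visible in the cluster accounting
  clusters=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  [ "$clusters" -eq 99 ]   # 99 x 4 MiB data clusters after the grow

Once the check passes, the script waits for perform_tests to finish and kills the bdevperf process, which is the killprocess sequence that follows.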
00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426841 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426841' 00:37:34.228 killing process with pid 426841 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 426841 00:37:34.228 Received shutdown signal, test time was about 10.000000 seconds 00:37:34.228 00:37:34.228 Latency(us) 00:37:34.228 [2024-11-19T02:18:44.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.228 [2024-11-19T02:18:44.843Z] =================================================================================================================== 00:37:34.228 [2024-11-19T02:18:44.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:34.228 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 426841 00:37:34.488 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:34.747 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.005 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:35.005 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:35.264 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:35.264 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:35.264 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:35.524 [2024-11-19 03:18:45.975068] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 
00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:35.525 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:35.785 request: 00:37:35.785 { 00:37:35.785 "uuid": "0c694c4e-2e56-4d83-b66a-b9b835d0b97b", 00:37:35.785 "method": "bdev_lvol_get_lvstores", 00:37:35.785 "req_id": 1 00:37:35.785 } 00:37:35.785 Got JSON-RPC error response 00:37:35.785 response: 00:37:35.785 { 00:37:35.785 "code": -19, 00:37:35.785 "message": "No such device" 00:37:35.785 } 00:37:35.785 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:35.785 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:35.785 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:35.785 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:35.785 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:36.046 aio_bdev 00:37:36.046 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
0de472b6-68d1-4758-8516-d2782dd5dbde 00:37:36.046 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0de472b6-68d1-4758-8516-d2782dd5dbde 00:37:36.046 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:36.046 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:36.046 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:36.046 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:36.046 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:36.304 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0de472b6-68d1-4758-8516-d2782dd5dbde -t 2000 00:37:36.563 [ 00:37:36.563 { 00:37:36.563 "name": "0de472b6-68d1-4758-8516-d2782dd5dbde", 00:37:36.563 "aliases": [ 00:37:36.563 "lvs/lvol" 00:37:36.563 ], 00:37:36.563 "product_name": "Logical Volume", 00:37:36.563 "block_size": 4096, 00:37:36.563 "num_blocks": 38912, 00:37:36.563 "uuid": "0de472b6-68d1-4758-8516-d2782dd5dbde", 00:37:36.563 "assigned_rate_limits": { 00:37:36.563 "rw_ios_per_sec": 0, 00:37:36.563 "rw_mbytes_per_sec": 0, 00:37:36.563 "r_mbytes_per_sec": 0, 00:37:36.563 "w_mbytes_per_sec": 0 00:37:36.563 }, 00:37:36.563 "claimed": false, 00:37:36.563 "zoned": false, 00:37:36.563 "supported_io_types": { 00:37:36.563 "read": true, 00:37:36.563 "write": true, 00:37:36.563 "unmap": true, 00:37:36.563 "flush": false, 00:37:36.563 "reset": true, 00:37:36.563 "nvme_admin": false, 00:37:36.563 "nvme_io": false, 00:37:36.563 "nvme_io_md": false, 00:37:36.563 "write_zeroes": true, 00:37:36.563 "zcopy": false, 00:37:36.563 "get_zone_info": false, 00:37:36.563 "zone_management": false, 00:37:36.563 "zone_append": false, 00:37:36.563 "compare": false, 00:37:36.563 "compare_and_write": false, 00:37:36.563 "abort": false, 00:37:36.563 "seek_hole": true, 00:37:36.563 "seek_data": true, 00:37:36.563 "copy": false, 00:37:36.563 "nvme_iov_md": false 00:37:36.563 }, 00:37:36.563 "driver_specific": { 00:37:36.563 "lvol": { 00:37:36.563 "lvol_store_uuid": "0c694c4e-2e56-4d83-b66a-b9b835d0b97b", 00:37:36.563 "base_bdev": "aio_bdev", 00:37:36.563 "thin_provision": false, 00:37:36.563 "num_allocated_clusters": 38, 00:37:36.563 "snapshot": false, 00:37:36.563 "clone": false, 00:37:36.563 "esnap_clone": false 00:37:36.563 } 00:37:36.563 } 00:37:36.563 } 00:37:36.563 ] 00:37:36.563 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:36.563 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:36.563 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:36.823 03:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:36.823 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:37.084 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:37.344 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:37.344 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0de472b6-68d1-4758-8516-d2782dd5dbde 00:37:37.604 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c694c4e-2e56-4d83-b66a-b9b835d0b97b 00:37:37.863 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:38.122 00:37:38.122 real 0m17.868s 00:37:38.122 user 0m16.611s 00:37:38.122 sys 0m2.242s 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:38.122 ************************************ 00:37:38.122 END TEST lvs_grow_clean 00:37:38.122 ************************************ 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:38.122 ************************************ 00:37:38.122 START TEST lvs_grow_dirty 00:37:38.122 ************************************ 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:38.122 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:38.381 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:38.381 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:38.639 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:38.639 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:38.639 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:38.897 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:38.897 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:38.897 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 lvol 150 00:37:39.179 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 00:37:39.179 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:39.179 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:39.484 [2024-11-19 03:18:49.994989] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:39.484 [2024-11-19 03:18:49.995092] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:39.484 true 00:37:39.484 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:39.484 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:39.768 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:39.768 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:40.026 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 00:37:40.285 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:40.542 [2024-11-19 03:18:51.131268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:40.542 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=428883 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 428883 /var/tmp/bdevperf.sock 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 428883 ']' 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:41.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
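At this point the dirty variant has rebuilt the whole fixture: a 200 MB file-backed aio bdev, an lvstore on top of it with 4 MiB clusters (49 data clusters), and a 150 MB lvol (which rounds up to 38 clusters, i.e. 38912 4 KiB blocks). The backing file is then truncated to 400 MB, bdev_aio_rescan makes the extra blocks visible to the aio bdev, the lvol is exported over NVMe/TCP, and a fresh bdevperf instance is started. Condensed into plain commands (the file path, lvstore UUID and lvol UUID are placeholders for the values traced above):

  truncate -s 200M aio_bdev_file
  ./scripts/rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs           # -> 49 data clusters
  ./scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150    # 150 MB lvol, 38 clusters allocated

  truncate -s 400M aio_bdev_file
  ./scripts/rpc.py bdev_aio_rescan aio_bdev                   # aio bdev grows 51200 -> 102400 blocks; lvstore still 49 clusters

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that only the aio bdev has been rescanned; the lvstore itself is not grown until bdevperf is already running, exactly as in the clean variant.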
00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:41.108 [2024-11-19 03:18:51.468889] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:41.108 [2024-11-19 03:18:51.468970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428883 ] 00:37:41.108 [2024-11-19 03:18:51.536340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.108 [2024-11-19 03:18:51.584653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:41.108 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:41.675 Nvme0n1 00:37:41.675 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:41.936 [ 00:37:41.936 { 00:37:41.936 "name": "Nvme0n1", 00:37:41.936 "aliases": [ 00:37:41.936 "4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7" 00:37:41.936 ], 00:37:41.936 "product_name": "NVMe disk", 00:37:41.936 "block_size": 4096, 00:37:41.936 "num_blocks": 38912, 00:37:41.936 "uuid": "4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7", 00:37:41.936 "numa_id": 0, 00:37:41.936 "assigned_rate_limits": { 00:37:41.936 "rw_ios_per_sec": 0, 00:37:41.936 "rw_mbytes_per_sec": 0, 00:37:41.936 "r_mbytes_per_sec": 0, 00:37:41.936 "w_mbytes_per_sec": 0 00:37:41.936 }, 00:37:41.936 "claimed": false, 00:37:41.936 "zoned": false, 00:37:41.936 "supported_io_types": { 00:37:41.936 "read": true, 00:37:41.936 "write": true, 00:37:41.936 "unmap": true, 00:37:41.936 "flush": true, 00:37:41.936 "reset": true, 00:37:41.936 "nvme_admin": true, 00:37:41.936 "nvme_io": true, 00:37:41.936 "nvme_io_md": false, 00:37:41.936 "write_zeroes": true, 00:37:41.936 "zcopy": false, 00:37:41.936 "get_zone_info": false, 00:37:41.936 "zone_management": false, 00:37:41.936 "zone_append": false, 00:37:41.936 "compare": true, 00:37:41.936 "compare_and_write": true, 00:37:41.936 "abort": true, 00:37:41.936 "seek_hole": false, 00:37:41.936 "seek_data": false, 00:37:41.936 "copy": true, 00:37:41.936 "nvme_iov_md": false 00:37:41.936 }, 00:37:41.936 "memory_domains": [ 00:37:41.936 { 00:37:41.936 "dma_device_id": "system", 00:37:41.936 "dma_device_type": 1 00:37:41.936 } 00:37:41.936 ], 00:37:41.936 "driver_specific": { 00:37:41.936 "nvme": [ 00:37:41.936 { 00:37:41.936 "trid": { 00:37:41.936 "trtype": "TCP", 00:37:41.936 "adrfam": "IPv4", 00:37:41.936 "traddr": "10.0.0.2", 00:37:41.936 "trsvcid": "4420", 00:37:41.936 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:41.936 }, 00:37:41.936 "ctrlr_data": { 
00:37:41.936 "cntlid": 1, 00:37:41.936 "vendor_id": "0x8086", 00:37:41.936 "model_number": "SPDK bdev Controller", 00:37:41.936 "serial_number": "SPDK0", 00:37:41.936 "firmware_revision": "25.01", 00:37:41.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:41.936 "oacs": { 00:37:41.936 "security": 0, 00:37:41.936 "format": 0, 00:37:41.936 "firmware": 0, 00:37:41.936 "ns_manage": 0 00:37:41.936 }, 00:37:41.936 "multi_ctrlr": true, 00:37:41.936 "ana_reporting": false 00:37:41.936 }, 00:37:41.936 "vs": { 00:37:41.936 "nvme_version": "1.3" 00:37:41.936 }, 00:37:41.936 "ns_data": { 00:37:41.936 "id": 1, 00:37:41.936 "can_share": true 00:37:41.936 } 00:37:41.936 } 00:37:41.936 ], 00:37:41.936 "mp_policy": "active_passive" 00:37:41.936 } 00:37:41.936 } 00:37:41.936 ] 00:37:41.936 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=429012 00:37:41.936 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:41.936 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:41.936 Running I/O for 10 seconds... 00:37:42.875 Latency(us) 00:37:42.875 [2024-11-19T02:18:53.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:42.875 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:42.875 [2024-11-19T02:18:53.490Z] =================================================================================================================== 00:37:42.875 [2024-11-19T02:18:53.490Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:42.875 00:37:43.811 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:43.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.811 Nvme0n1 : 2.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:43.811 [2024-11-19T02:18:54.426Z] =================================================================================================================== 00:37:43.811 [2024-11-19T02:18:54.426Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:43.811 00:37:44.070 true 00:37:44.070 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:44.070 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:44.330 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:44.330 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:44.330 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 429012 00:37:44.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.900 Nvme0n1 : 3.00 
15134.33 59.12 0.00 0.00 0.00 0.00 0.00 00:37:44.900 [2024-11-19T02:18:55.515Z] =================================================================================================================== 00:37:44.900 [2024-11-19T02:18:55.515Z] Total : 15134.33 59.12 0.00 0.00 0.00 0.00 0.00 00:37:44.900 00:37:45.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:45.834 Nvme0n1 : 4.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:37:45.834 [2024-11-19T02:18:56.449Z] =================================================================================================================== 00:37:45.834 [2024-11-19T02:18:56.449Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:37:45.834 00:37:47.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.214 Nvme0n1 : 5.00 15323.00 59.86 0.00 0.00 0.00 0.00 0.00 00:37:47.214 [2024-11-19T02:18:57.829Z] =================================================================================================================== 00:37:47.214 [2024-11-19T02:18:57.829Z] Total : 15323.00 59.86 0.00 0.00 0.00 0.00 0.00 00:37:47.214 00:37:48.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.149 Nvme0n1 : 6.00 15393.83 60.13 0.00 0.00 0.00 0.00 0.00 00:37:48.149 [2024-11-19T02:18:58.764Z] =================================================================================================================== 00:37:48.149 [2024-11-19T02:18:58.764Z] Total : 15393.83 60.13 0.00 0.00 0.00 0.00 0.00 00:37:48.149 00:37:49.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.108 Nvme0n1 : 7.00 15462.57 60.40 0.00 0.00 0.00 0.00 0.00 00:37:49.108 [2024-11-19T02:18:59.723Z] =================================================================================================================== 00:37:49.108 [2024-11-19T02:18:59.723Z] Total : 15462.57 60.40 0.00 0.00 0.00 0.00 0.00 00:37:49.108 00:37:50.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.042 Nvme0n1 : 8.00 15470.75 60.43 0.00 0.00 0.00 0.00 0.00 00:37:50.042 [2024-11-19T02:19:00.657Z] =================================================================================================================== 00:37:50.042 [2024-11-19T02:19:00.657Z] Total : 15470.75 60.43 0.00 0.00 0.00 0.00 0.00 00:37:50.042 00:37:50.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.980 Nvme0n1 : 9.00 15515.67 60.61 0.00 0.00 0.00 0.00 0.00 00:37:50.980 [2024-11-19T02:19:01.595Z] =================================================================================================================== 00:37:50.980 [2024-11-19T02:19:01.595Z] Total : 15515.67 60.61 0.00 0.00 0.00 0.00 0.00 00:37:50.980 00:37:51.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.918 Nvme0n1 : 10.00 15551.60 60.75 0.00 0.00 0.00 0.00 0.00 00:37:51.918 [2024-11-19T02:19:02.533Z] =================================================================================================================== 00:37:51.918 [2024-11-19T02:19:02.533Z] Total : 15551.60 60.75 0.00 0.00 0.00 0.00 0.00 00:37:51.918 00:37:51.918 00:37:51.918 Latency(us) 00:37:51.918 [2024-11-19T02:19:02.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.918 Nvme0n1 : 10.01 15554.01 60.76 0.00 0.00 8224.85 4296.25 17961.72 00:37:51.918 
[2024-11-19T02:19:02.533Z] =================================================================================================================== 00:37:51.918 [2024-11-19T02:19:02.533Z] Total : 15554.01 60.76 0.00 0.00 8224.85 4296.25 17961.72 00:37:51.918 { 00:37:51.918 "results": [ 00:37:51.918 { 00:37:51.918 "job": "Nvme0n1", 00:37:51.918 "core_mask": "0x2", 00:37:51.918 "workload": "randwrite", 00:37:51.918 "status": "finished", 00:37:51.918 "queue_depth": 128, 00:37:51.918 "io_size": 4096, 00:37:51.918 "runtime": 10.006682, 00:37:51.918 "iops": 15554.006812647789, 00:37:51.918 "mibps": 60.757839111905426, 00:37:51.918 "io_failed": 0, 00:37:51.918 "io_timeout": 0, 00:37:51.918 "avg_latency_us": 8224.850278917605, 00:37:51.918 "min_latency_us": 4296.248888888889, 00:37:51.918 "max_latency_us": 17961.71851851852 00:37:51.918 } 00:37:51.918 ], 00:37:51.918 "core_count": 1 00:37:51.918 } 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 428883 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 428883 ']' 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 428883 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428883 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428883' 00:37:51.918 killing process with pid 428883 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 428883 00:37:51.918 Received shutdown signal, test time was about 10.000000 seconds 00:37:51.918 00:37:51.918 Latency(us) 00:37:51.918 [2024-11-19T02:19:02.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.918 [2024-11-19T02:19:02.533Z] =================================================================================================================== 00:37:51.918 [2024-11-19T02:19:02.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:51.918 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 428883 00:37:52.177 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:52.435 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:37:52.693 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:52.693 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:52.953 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 426402 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 426402 00:37:52.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 426402 Killed "${NVMF_APP[@]}" "$@" 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=430341 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 430341 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 430341 ']' 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
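This is the step that makes the variant "dirty": the original nvmf_tgt (pid 426402), which still owns the freshly grown lvstore, is killed with SIGKILL so it never gets a chance to shut the blobstore down cleanly, and a replacement target is started with --interrupt-mode inside the same network namespace. waitforlisten then blocks until the new target's RPC socket answers; a rough stand-in for that helper (the poll loop below is illustrative, not the autotest implementation) is:

  # restart the target in interrupt mode, as traced above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!

  # poll the default RPC socket until the application is ready
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done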
00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.954 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:53.213 [2024-11-19 03:19:03.612871] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:53.213 [2024-11-19 03:19:03.613976] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:37:53.213 [2024-11-19 03:19:03.614039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:53.213 [2024-11-19 03:19:03.686189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.213 [2024-11-19 03:19:03.727676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:53.213 [2024-11-19 03:19:03.727746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:53.213 [2024-11-19 03:19:03.727774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:53.213 [2024-11-19 03:19:03.727785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:53.213 [2024-11-19 03:19:03.727794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:53.213 [2024-11-19 03:19:03.728318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.213 [2024-11-19 03:19:03.809975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:53.213 [2024-11-19 03:19:03.810355] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
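With the interrupt-mode target up, the next trace re-creates the aio bdev on the now-400 MB backing file. Because the lvstore on it was never closed cleanly, loading it triggers blobstore recovery (the "Performing recovery on blobstore" notices below), after which the lvol reappears and the test verifies that the grow performed before the SIGKILL survived: 99 total data clusters, of which 61 are free (99 minus the 38 allocated to the lvol). A sketch of that verification, with the UUIDs abbreviated:

  ./scripts/rpc.py bdev_aio_create /path/to/aio_bdev_file aio_bdev 4096   # lvstore load + blobstore recovery happen here
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000                  # lvol is back after recovery

  ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'        # expect 61
  ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'  # expect 99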
00:37:53.471 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.471 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:53.471 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:53.471 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:53.471 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:53.471 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:53.471 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:53.730 [2024-11-19 03:19:04.111109] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:53.730 [2024-11-19 03:19:04.111252] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:53.730 [2024-11-19 03:19:04.111301] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:53.730 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:53.988 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 -t 2000 00:37:54.246 [ 00:37:54.246 { 00:37:54.246 "name": "4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7", 00:37:54.246 "aliases": [ 00:37:54.246 "lvs/lvol" 00:37:54.246 ], 00:37:54.246 "product_name": "Logical Volume", 00:37:54.246 "block_size": 4096, 00:37:54.246 "num_blocks": 38912, 00:37:54.246 "uuid": "4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7", 00:37:54.246 "assigned_rate_limits": { 00:37:54.246 "rw_ios_per_sec": 0, 00:37:54.246 "rw_mbytes_per_sec": 0, 00:37:54.246 
"r_mbytes_per_sec": 0, 00:37:54.246 "w_mbytes_per_sec": 0 00:37:54.246 }, 00:37:54.246 "claimed": false, 00:37:54.246 "zoned": false, 00:37:54.246 "supported_io_types": { 00:37:54.246 "read": true, 00:37:54.246 "write": true, 00:37:54.246 "unmap": true, 00:37:54.246 "flush": false, 00:37:54.246 "reset": true, 00:37:54.246 "nvme_admin": false, 00:37:54.246 "nvme_io": false, 00:37:54.246 "nvme_io_md": false, 00:37:54.246 "write_zeroes": true, 00:37:54.246 "zcopy": false, 00:37:54.246 "get_zone_info": false, 00:37:54.246 "zone_management": false, 00:37:54.246 "zone_append": false, 00:37:54.246 "compare": false, 00:37:54.246 "compare_and_write": false, 00:37:54.246 "abort": false, 00:37:54.246 "seek_hole": true, 00:37:54.246 "seek_data": true, 00:37:54.246 "copy": false, 00:37:54.246 "nvme_iov_md": false 00:37:54.246 }, 00:37:54.246 "driver_specific": { 00:37:54.246 "lvol": { 00:37:54.246 "lvol_store_uuid": "b95ce9dc-6d8f-4f19-82ad-f41997f6e457", 00:37:54.246 "base_bdev": "aio_bdev", 00:37:54.246 "thin_provision": false, 00:37:54.246 "num_allocated_clusters": 38, 00:37:54.246 "snapshot": false, 00:37:54.246 "clone": false, 00:37:54.246 "esnap_clone": false 00:37:54.246 } 00:37:54.246 } 00:37:54.246 } 00:37:54.246 ] 00:37:54.246 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:54.246 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:54.246 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:54.505 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:54.505 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:54.505 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:54.763 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:54.763 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:55.021 [2024-11-19 03:19:05.508818] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:55.021 03:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:55.021 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:55.279 request: 00:37:55.279 { 00:37:55.279 "uuid": "b95ce9dc-6d8f-4f19-82ad-f41997f6e457", 00:37:55.279 "method": "bdev_lvol_get_lvstores", 00:37:55.279 "req_id": 1 00:37:55.279 } 00:37:55.279 Got JSON-RPC error response 00:37:55.279 response: 00:37:55.279 { 00:37:55.279 "code": -19, 00:37:55.279 "message": "No such device" 00:37:55.279 } 00:37:55.279 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:37:55.279 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:55.279 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:55.279 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:55.279 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:55.539 aio_bdev 00:37:55.539 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 00:37:55.539 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 00:37:55.539 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:55.539 03:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:55.539 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:55.539 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:55.539 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:55.799 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 -t 2000 00:37:56.060 [ 00:37:56.060 { 00:37:56.060 "name": "4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7", 00:37:56.060 "aliases": [ 00:37:56.060 "lvs/lvol" 00:37:56.060 ], 00:37:56.060 "product_name": "Logical Volume", 00:37:56.060 "block_size": 4096, 00:37:56.060 "num_blocks": 38912, 00:37:56.060 "uuid": "4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7", 00:37:56.060 "assigned_rate_limits": { 00:37:56.060 "rw_ios_per_sec": 0, 00:37:56.060 "rw_mbytes_per_sec": 0, 00:37:56.060 "r_mbytes_per_sec": 0, 00:37:56.060 "w_mbytes_per_sec": 0 00:37:56.060 }, 00:37:56.060 "claimed": false, 00:37:56.060 "zoned": false, 00:37:56.060 "supported_io_types": { 00:37:56.060 "read": true, 00:37:56.060 "write": true, 00:37:56.060 "unmap": true, 00:37:56.060 "flush": false, 00:37:56.060 "reset": true, 00:37:56.060 "nvme_admin": false, 00:37:56.060 "nvme_io": false, 00:37:56.060 "nvme_io_md": false, 00:37:56.060 "write_zeroes": true, 00:37:56.060 "zcopy": false, 00:37:56.060 "get_zone_info": false, 00:37:56.060 "zone_management": false, 00:37:56.060 "zone_append": false, 00:37:56.060 "compare": false, 00:37:56.060 "compare_and_write": false, 00:37:56.060 "abort": false, 00:37:56.060 "seek_hole": true, 00:37:56.060 "seek_data": true, 00:37:56.060 "copy": false, 00:37:56.060 "nvme_iov_md": false 00:37:56.060 }, 00:37:56.060 "driver_specific": { 00:37:56.060 "lvol": { 00:37:56.060 "lvol_store_uuid": "b95ce9dc-6d8f-4f19-82ad-f41997f6e457", 00:37:56.060 "base_bdev": "aio_bdev", 00:37:56.060 "thin_provision": false, 00:37:56.060 "num_allocated_clusters": 38, 00:37:56.060 "snapshot": false, 00:37:56.060 "clone": false, 00:37:56.060 "esnap_clone": false 00:37:56.060 } 00:37:56.060 } 00:37:56.060 } 00:37:56.060 ] 00:37:56.060 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:56.060 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:56.060 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:56.318 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:56.318 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:56.318 03:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:56.885 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:56.885 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7 00:37:56.885 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b95ce9dc-6d8f-4f19-82ad-f41997f6e457 00:37:57.455 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:57.455 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:57.715 00:37:57.715 real 0m19.442s 00:37:57.715 user 0m36.461s 00:37:57.715 sys 0m4.653s 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:57.715 ************************************ 00:37:57.715 END TEST lvs_grow_dirty 00:37:57.715 ************************************ 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:57.715 nvmf_trace.0 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
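For reference, the lvs_grow_dirty flow traced above reduces to the following RPC sequence. This is a minimal sketch rather than the harness script itself: paths are shortened behind a variable, the UUIDs are the ones from this particular run, and rpc.py is assumed to be pointed at the already-running interrupt-mode target on the default /var/tmp/spdk.sock.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LVS_UUID=b95ce9dc-6d8f-4f19-82ad-f41997f6e457
  LVOL_UUID=4f7d3ff2-ac6d-4999-bcdd-abe83a7f30f7

  # Simulate a dirty shutdown: hot-remove the AIO base bdev, which closes the lvstore with it.
  $SPDK/scripts/rpc.py bdev_aio_delete aio_bdev
  # The lvstore must now be unreachable; this lookup is expected to fail with -19 "No such device".
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS_UUID || true
  # Re-create the AIO bdev on the same backing file; examine-on-load recovers the dirty lvstore.
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  # Wait for the recovered lvol and confirm the cluster accounting survived the reload.
  $SPDK/scripts/rpc.py bdev_get_bdevs -b $LVOL_UUID -t 2000
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].free_clusters'        # 61 in this run
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].total_data_clusters'  # 99 in this run
  # Tear down: lvol, then lvstore, then the AIO bdev and its backing file.
  $SPDK/scripts/rpc.py bdev_lvol_delete $LVOL_UUID
  $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -u $LVS_UUID
  $SPDK/scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f $SPDK/test/nvmf/target/aio_bdev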
00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:57.715 rmmod nvme_tcp 00:37:57.715 rmmod nvme_fabrics 00:37:57.715 rmmod nvme_keyring 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 430341 ']' 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 430341 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 430341 ']' 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 430341 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430341 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430341' 00:37:57.715 killing process with pid 430341 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 430341 00:37:57.715 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 430341 00:37:57.975 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:57.976 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.882 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:59.882 00:37:59.882 real 0m42.874s 00:37:59.882 user 0m54.841s 00:37:59.882 sys 0m8.949s 00:37:59.882 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:59.882 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:59.882 ************************************ 00:37:59.882 END TEST nvmf_lvs_grow 00:37:59.882 ************************************ 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:00.142 ************************************ 00:38:00.142 START TEST nvmf_bdev_io_wait 00:38:00.142 ************************************ 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:00.142 * Looking for test storage... 
00:38:00.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:00.142 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:00.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.142 --rc genhtml_branch_coverage=1 00:38:00.142 --rc genhtml_function_coverage=1 00:38:00.142 --rc genhtml_legend=1 00:38:00.142 --rc geninfo_all_blocks=1 00:38:00.143 --rc geninfo_unexecuted_blocks=1 00:38:00.143 00:38:00.143 ' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.143 --rc genhtml_branch_coverage=1 00:38:00.143 --rc genhtml_function_coverage=1 00:38:00.143 --rc genhtml_legend=1 00:38:00.143 --rc geninfo_all_blocks=1 00:38:00.143 --rc geninfo_unexecuted_blocks=1 00:38:00.143 00:38:00.143 ' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.143 --rc genhtml_branch_coverage=1 00:38:00.143 --rc genhtml_function_coverage=1 00:38:00.143 --rc genhtml_legend=1 00:38:00.143 --rc geninfo_all_blocks=1 00:38:00.143 --rc geninfo_unexecuted_blocks=1 00:38:00.143 00:38:00.143 ' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.143 --rc genhtml_branch_coverage=1 00:38:00.143 --rc genhtml_function_coverage=1 00:38:00.143 --rc genhtml_legend=1 00:38:00.143 --rc geninfo_all_blocks=1 00:38:00.143 --rc 
geninfo_unexecuted_blocks=1 00:38:00.143 00:38:00.143 ' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:00.143 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:02.676 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:02.676 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:02.676 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:02.676 
03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.676 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:02.677 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:02.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:02.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:38:02.677 00:38:02.677 --- 10.0.0.2 ping statistics --- 00:38:02.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.677 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:02.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:02.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:38:02.677 00:38:02.677 --- 10.0.0.1 ping statistics --- 00:38:02.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.677 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=432858 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 432858 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 432858 ']' 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
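The nvmf_tcp_init steps in the trace above pin the two ice ports into a point-to-point NVMe/TCP test topology before the target is started. A condensed sketch of that setup, assuming the ports have already been renamed cvl_0_0/cvl_0_1 by the lab tooling and that the commands run as root:

  # Target side lives in its own network namespace; the initiator side stays in the default one.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0 inside the namespace).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port on the initiator-facing interface, tagged so cleanup can find the rule.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Sanity-check both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1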
00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:02.677 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.677 [2024-11-19 03:19:12.939537] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:02.677 [2024-11-19 03:19:12.940687] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:02.677 [2024-11-19 03:19:12.940781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:02.677 [2024-11-19 03:19:13.019333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:02.677 [2024-11-19 03:19:13.067764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:02.677 [2024-11-19 03:19:13.067827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:02.677 [2024-11-19 03:19:13.067841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:02.677 [2024-11-19 03:19:13.067853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:02.677 [2024-11-19 03:19:13.067863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:02.677 [2024-11-19 03:19:13.069510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.677 [2024-11-19 03:19:13.069569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:02.677 [2024-11-19 03:19:13.069637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:02.677 [2024-11-19 03:19:13.069640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.677 [2024-11-19 03:19:13.070214] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.677 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.677 [2024-11-19 03:19:13.271525] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:02.677 [2024-11-19 03:19:13.271735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:02.678 [2024-11-19 03:19:13.272601] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:02.678 [2024-11-19 03:19:13.273477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.678 [2024-11-19 03:19:13.278403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.678 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.937 Malloc0 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:02.937 [2024-11-19 03:19:13.334546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=432963 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=432966 00:38:02.937 03:19:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=432968 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.937 { 00:38:02.937 "params": { 00:38:02.937 "name": "Nvme$subsystem", 00:38:02.937 "trtype": "$TEST_TRANSPORT", 00:38:02.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.937 "adrfam": "ipv4", 00:38:02.937 "trsvcid": "$NVMF_PORT", 00:38:02.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.937 "hdgst": ${hdgst:-false}, 00:38:02.937 "ddgst": ${ddgst:-false} 00:38:02.937 }, 00:38:02.937 "method": "bdev_nvme_attach_controller" 00:38:02.937 } 00:38:02.937 EOF 00:38:02.937 )") 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=432971 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.937 { 00:38:02.937 "params": { 00:38:02.937 "name": "Nvme$subsystem", 00:38:02.937 "trtype": "$TEST_TRANSPORT", 00:38:02.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.937 "adrfam": "ipv4", 00:38:02.937 "trsvcid": "$NVMF_PORT", 00:38:02.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.937 "hdgst": ${hdgst:-false}, 00:38:02.937 "ddgst": ${ddgst:-false} 00:38:02.937 }, 00:38:02.937 "method": "bdev_nvme_attach_controller" 00:38:02.937 } 00:38:02.937 EOF 
00:38:02.937 )") 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:02.937 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.937 { 00:38:02.937 "params": { 00:38:02.937 "name": "Nvme$subsystem", 00:38:02.937 "trtype": "$TEST_TRANSPORT", 00:38:02.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.937 "adrfam": "ipv4", 00:38:02.937 "trsvcid": "$NVMF_PORT", 00:38:02.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.937 "hdgst": ${hdgst:-false}, 00:38:02.937 "ddgst": ${ddgst:-false} 00:38:02.937 }, 00:38:02.937 "method": "bdev_nvme_attach_controller" 00:38:02.937 } 00:38:02.937 EOF 00:38:02.937 )") 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.938 { 00:38:02.938 "params": { 00:38:02.938 "name": "Nvme$subsystem", 00:38:02.938 "trtype": "$TEST_TRANSPORT", 00:38:02.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.938 "adrfam": "ipv4", 00:38:02.938 "trsvcid": "$NVMF_PORT", 00:38:02.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.938 "hdgst": ${hdgst:-false}, 00:38:02.938 "ddgst": ${ddgst:-false} 00:38:02.938 }, 00:38:02.938 "method": "bdev_nvme_attach_controller" 00:38:02.938 } 00:38:02.938 EOF 00:38:02.938 )") 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 432963 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.938 "params": { 00:38:02.938 "name": "Nvme1", 00:38:02.938 "trtype": "tcp", 00:38:02.938 "traddr": "10.0.0.2", 00:38:02.938 "adrfam": "ipv4", 00:38:02.938 "trsvcid": "4420", 00:38:02.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:02.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:02.938 "hdgst": false, 00:38:02.938 "ddgst": false 00:38:02.938 }, 00:38:02.938 "method": "bdev_nvme_attach_controller" 00:38:02.938 }' 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.938 "params": { 00:38:02.938 "name": "Nvme1", 00:38:02.938 "trtype": "tcp", 00:38:02.938 "traddr": "10.0.0.2", 00:38:02.938 "adrfam": "ipv4", 00:38:02.938 "trsvcid": "4420", 00:38:02.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:02.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:02.938 "hdgst": false, 00:38:02.938 "ddgst": false 00:38:02.938 }, 00:38:02.938 "method": "bdev_nvme_attach_controller" 00:38:02.938 }' 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.938 "params": { 00:38:02.938 "name": "Nvme1", 00:38:02.938 "trtype": "tcp", 00:38:02.938 "traddr": "10.0.0.2", 00:38:02.938 "adrfam": "ipv4", 00:38:02.938 "trsvcid": "4420", 00:38:02.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:02.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:02.938 "hdgst": false, 00:38:02.938 "ddgst": false 00:38:02.938 }, 00:38:02.938 "method": "bdev_nvme_attach_controller" 00:38:02.938 }' 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:02.938 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.938 "params": { 00:38:02.938 "name": "Nvme1", 00:38:02.938 "trtype": "tcp", 00:38:02.938 "traddr": "10.0.0.2", 00:38:02.938 "adrfam": "ipv4", 00:38:02.938 "trsvcid": "4420", 00:38:02.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:02.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:02.938 "hdgst": false, 00:38:02.938 "ddgst": false 00:38:02.938 }, 00:38:02.938 "method": "bdev_nvme_attach_controller" 00:38:02.938 }' 00:38:02.938 [2024-11-19 03:19:13.385036] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:02.938 [2024-11-19 03:19:13.385036] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
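The four parameter blocks printed above are what gen_nvmf_target_json resolves for each bdevperf instance; each instance consumes them as a JSON config over --json /dev/fd/63. As an illustration, a hypothetical standalone equivalent of the write job (core mask 0x10) might look like the sketch below; the params block is copied from the trace, while the outer "subsystems"/"bdev" wrapper is an assumption about how the helper packages it, not text from this run.

# Sketch: standalone equivalent of the -w write bdevperf job above (wrapper format assumed).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)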
00:38:02.938 [2024-11-19 03:19:13.385123] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:02.938 [2024-11-19 03:19:13.385123] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:02.938 [2024-11-19 03:19:13.385343] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:02.938 [2024-11-19 03:19:13.385343] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:02.938 [2024-11-19 03:19:13.385430] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:02.938 [2024-11-19 03:19:13.385430] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:03.196 [2024-11-19 03:19:13.569552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.196 [2024-11-19 03:19:13.612928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:03.196 [2024-11-19 03:19:13.669665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.196 [2024-11-19 03:19:13.708366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:03.196 [2024-11-19 03:19:13.734774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.196 [2024-11-19 03:19:13.772117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:03.196 [2024-11-19 03:19:13.799947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.454 [2024-11-19 03:19:13.838032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:03.454 Running I/O for 1 seconds... 00:38:03.454 Running I/O for 1 seconds... 00:38:03.454 Running I/O for 1 seconds... 00:38:03.454 Running I/O for 1 seconds...
00:38:04.390 5923.00 IOPS, 23.14 MiB/s [2024-11-19T02:19:15.005Z] 10232.00 IOPS, 39.97 MiB/s [2024-11-19T02:19:15.005Z] 190960.00 IOPS, 745.94 MiB/s 00:38:04.390 Latency(us) 00:38:04.390 [2024-11-19T02:19:15.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.390 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:04.390 Nvme1n1 : 1.00 190608.51 744.56 0.00 0.00 667.96 292.79 1844.72 00:38:04.390 [2024-11-19T02:19:15.005Z] =================================================================================================================== 00:38:04.390 [2024-11-19T02:19:15.005Z] Total : 190608.51 744.56 0.00 0.00 667.96 292.79 1844.72 00:38:04.390 00:38:04.390 Latency(us) 00:38:04.390 [2024-11-19T02:19:15.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.390 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:04.390 Nvme1n1 : 1.01 10302.24 40.24 0.00 0.00 12380.61 4636.07 19612.25 00:38:04.390 [2024-11-19T02:19:15.005Z] =================================================================================================================== 00:38:04.390 [2024-11-19T02:19:15.005Z] Total : 10302.24 40.24 0.00 0.00 12380.61 4636.07 19612.25 00:38:04.390 00:38:04.390 Latency(us) 00:38:04.390 [2024-11-19T02:19:15.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.390 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:04.390 Nvme1n1 : 1.02 5934.69 23.18 0.00 0.00 21379.01 4223.43 34564.17 00:38:04.390 [2024-11-19T02:19:15.005Z] =================================================================================================================== 00:38:04.390 [2024-11-19T02:19:15.005Z] Total : 5934.69 23.18 0.00 0.00 21379.01 4223.43 34564.17 00:38:04.649 5967.00 IOPS, 23.31 MiB/s 00:38:04.649 Latency(us) 00:38:04.649 [2024-11-19T02:19:15.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.650 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:04.650 Nvme1n1 : 1.01 6087.22 23.78 0.00 0.00 20961.08 4733.16 40389.59 00:38:04.650 [2024-11-19T02:19:15.265Z] =================================================================================================================== 00:38:04.650 [2024-11-19T02:19:15.265Z] Total : 6087.22 23.78 0.00 0.00 20961.08 4733.16 40389.59 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 432966 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 432968 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 432971 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:04.650 03:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:04.650 rmmod nvme_tcp 00:38:04.650 rmmod nvme_fabrics 00:38:04.650 rmmod nvme_keyring 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 432858 ']' 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 432858 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 432858 ']' 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 432858 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:04.650 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432858 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432858' 00:38:04.909 killing process with pid 432858 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 432858 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 432858 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:04.909 03:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:04.909 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:07.445 00:38:07.445 real 0m6.992s 00:38:07.445 user 0m13.565s 00:38:07.445 sys 0m3.799s 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:07.445 ************************************ 00:38:07.445 END TEST nvmf_bdev_io_wait 00:38:07.445 ************************************ 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:07.445 ************************************ 00:38:07.445 START TEST nvmf_queue_depth 00:38:07.445 ************************************ 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:07.445 * Looking for test storage... 
00:38:07.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:07.445 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:07.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.446 --rc genhtml_branch_coverage=1 00:38:07.446 --rc genhtml_function_coverage=1 00:38:07.446 --rc genhtml_legend=1 00:38:07.446 --rc geninfo_all_blocks=1 00:38:07.446 --rc geninfo_unexecuted_blocks=1 00:38:07.446 00:38:07.446 ' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:07.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.446 --rc genhtml_branch_coverage=1 00:38:07.446 --rc genhtml_function_coverage=1 00:38:07.446 --rc genhtml_legend=1 00:38:07.446 --rc geninfo_all_blocks=1 00:38:07.446 --rc geninfo_unexecuted_blocks=1 00:38:07.446 00:38:07.446 ' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:07.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.446 --rc genhtml_branch_coverage=1 00:38:07.446 --rc genhtml_function_coverage=1 00:38:07.446 --rc genhtml_legend=1 00:38:07.446 --rc geninfo_all_blocks=1 00:38:07.446 --rc geninfo_unexecuted_blocks=1 00:38:07.446 00:38:07.446 ' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:07.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.446 --rc genhtml_branch_coverage=1 00:38:07.446 --rc genhtml_function_coverage=1 00:38:07.446 --rc genhtml_legend=1 00:38:07.446 --rc geninfo_all_blocks=1 00:38:07.446 --rc 
geninfo_unexecuted_blocks=1 00:38:07.446 00:38:07.446 ' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:07.446 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:07.447 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:07.447 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:07.447 03:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:09.349 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:09.350 03:19:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:09.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:09.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:38:09.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:09.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:09.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:09.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:38:09.350 00:38:09.350 --- 10.0.0.2 ping statistics --- 00:38:09.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.350 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:09.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:09.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:38:09.350 00:38:09.350 --- 10.0.0.1 ping statistics --- 00:38:09.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.350 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:09.350 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:09.609 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=435111 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 435111 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435111 ']' 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
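Condensed from the nvmf_tcp_init trace above, the physical-NIC plumbing for this test comes down to the steps below. Every command is taken from the trace; the interface names (cvl_0_0 on the target side, cvl_0_1 on the initiator side) and the 10.0.0.x addresses are simply the values detected and assigned in this run.

# Sketch: network-namespace plumbing performed by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
modprobe nvme-tcp                                                   # kernel NVMe/TCP initiator driver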
00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.610 03:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.610 [2024-11-19 03:19:20.040858] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:09.610 [2024-11-19 03:19:20.042052] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:09.610 [2024-11-19 03:19:20.042113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.610 [2024-11-19 03:19:20.122828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.610 [2024-11-19 03:19:20.171582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.610 [2024-11-19 03:19:20.171683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.610 [2024-11-19 03:19:20.171709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.610 [2024-11-19 03:19:20.171721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.610 [2024-11-19 03:19:20.171745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:09.610 [2024-11-19 03:19:20.172346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.869 [2024-11-19 03:19:20.267531] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:09.870 [2024-11-19 03:19:20.267893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
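The target for this test is started in interrupt mode inside that namespace, exactly as the nvmfappstart trace above records. A minimal sketch of the launch-and-wait sequence follows; the polling loop is only a stand-in for the autotest waitforlisten helper, not its actual implementation.

# Sketch: start nvmf_tgt in interrupt mode inside the target namespace and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# stand-in for waitforlisten: poll until the target answers on /var/tmp/spdk.sock
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done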
00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.870 [2024-11-19 03:19:20.320927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.870 Malloc0 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.870 [2024-11-19 03:19:20.385125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=435252 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 435252 /var/tmp/bdevperf.sock 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435252 ']' 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:09.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.870 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.870 [2024-11-19 03:19:20.440128] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
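The bdevperf process started above attaches to the subsystem over TCP and runs the verify workload at queue depth 1024 with 4096-byte I/Os for 10 seconds (the -q/-o/-w/-t flags in its command line). In the output that follows, the Latency(us) header means the Average/min/max columns are in microseconds, and the MiB/s column is simply IOPS times the 4 KiB I/O size. A quick check of the final summary, using only the numbers printed below:

  # 8705.63 IOPS * 4096 B per I/O / 2^20 ~= 34.0 MiB/s, matching the reported 34.01
  echo 'scale=4; 8705.63 * 4096 / 1048576' | bc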
00:38:09.870 [2024-11-19 03:19:20.440208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435252 ] 00:38:10.128 [2024-11-19 03:19:20.516657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.129 [2024-11-19 03:19:20.568615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.129 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.129 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:10.129 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:10.129 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.129 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:10.388 NVMe0n1 00:38:10.388 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.388 03:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:10.388 Running I/O for 10 seconds... 00:38:12.705 8192.00 IOPS, 32.00 MiB/s [2024-11-19T02:19:24.257Z] 8385.50 IOPS, 32.76 MiB/s [2024-11-19T02:19:25.195Z] 8530.67 IOPS, 33.32 MiB/s [2024-11-19T02:19:26.135Z] 8553.00 IOPS, 33.41 MiB/s [2024-11-19T02:19:27.076Z] 8603.80 IOPS, 33.61 MiB/s [2024-11-19T02:19:28.014Z] 8605.83 IOPS, 33.62 MiB/s [2024-11-19T02:19:28.953Z] 8632.00 IOPS, 33.72 MiB/s [2024-11-19T02:19:30.333Z] 8658.25 IOPS, 33.82 MiB/s [2024-11-19T02:19:31.269Z] 8653.89 IOPS, 33.80 MiB/s [2024-11-19T02:19:31.269Z] 8695.10 IOPS, 33.97 MiB/s 00:38:20.654 Latency(us) 00:38:20.654 [2024-11-19T02:19:31.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.654 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:20.654 Verification LBA range: start 0x0 length 0x4000 00:38:20.654 NVMe0n1 : 10.10 8705.63 34.01 0.00 0.00 117126.67 21554.06 69905.07 00:38:20.654 [2024-11-19T02:19:31.269Z] =================================================================================================================== 00:38:20.654 [2024-11-19T02:19:31.269Z] Total : 8705.63 34.01 0.00 0.00 117126.67 21554.06 69905.07 00:38:20.654 { 00:38:20.654 "results": [ 00:38:20.654 { 00:38:20.654 "job": "NVMe0n1", 00:38:20.654 "core_mask": "0x1", 00:38:20.654 "workload": "verify", 00:38:20.654 "status": "finished", 00:38:20.654 "verify_range": { 00:38:20.654 "start": 0, 00:38:20.654 "length": 16384 00:38:20.654 }, 00:38:20.654 "queue_depth": 1024, 00:38:20.654 "io_size": 4096, 00:38:20.654 "runtime": 10.097369, 00:38:20.654 "iops": 8705.634111222438, 00:38:20.654 "mibps": 34.00638324696265, 00:38:20.654 "io_failed": 0, 00:38:20.654 "io_timeout": 0, 00:38:20.654 "avg_latency_us": 117126.67265730965, 00:38:20.654 "min_latency_us": 21554.062222222223, 00:38:20.654 "max_latency_us": 69905.06666666667 00:38:20.654 } 00:38:20.654 ], 
00:38:20.654 "core_count": 1 00:38:20.654 } 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 435252 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435252 ']' 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435252 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435252 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435252' 00:38:20.654 killing process with pid 435252 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435252 00:38:20.654 Received shutdown signal, test time was about 10.000000 seconds 00:38:20.654 00:38:20.654 Latency(us) 00:38:20.654 [2024-11-19T02:19:31.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.654 [2024-11-19T02:19:31.269Z] =================================================================================================================== 00:38:20.654 [2024-11-19T02:19:31.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435252 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:20.654 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:20.654 rmmod nvme_tcp 00:38:20.915 rmmod nvme_fabrics 00:38:20.915 rmmod nvme_keyring 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:20.915 03:19:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 435111 ']' 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 435111 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435111 ']' 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435111 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435111 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435111' 00:38:20.915 killing process with pid 435111 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435111 00:38:20.915 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435111 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:21.195 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:23.109 00:38:23.109 real 0m16.022s 00:38:23.109 user 0m22.113s 00:38:23.109 sys 0m3.411s 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:23.109 ************************************ 00:38:23.109 END TEST nvmf_queue_depth 00:38:23.109 ************************************ 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:23.109 ************************************ 00:38:23.109 START TEST nvmf_target_multipath 00:38:23.109 ************************************ 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:23.109 * Looking for test storage... 00:38:23.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:23.109 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:23.369 03:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:23.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.369 --rc genhtml_branch_coverage=1 00:38:23.369 --rc genhtml_function_coverage=1 00:38:23.369 --rc genhtml_legend=1 00:38:23.369 --rc geninfo_all_blocks=1 00:38:23.369 --rc geninfo_unexecuted_blocks=1 00:38:23.369 00:38:23.369 ' 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:23.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.369 --rc genhtml_branch_coverage=1 00:38:23.369 --rc genhtml_function_coverage=1 00:38:23.369 --rc genhtml_legend=1 00:38:23.369 --rc geninfo_all_blocks=1 00:38:23.369 --rc geninfo_unexecuted_blocks=1 00:38:23.369 00:38:23.369 ' 00:38:23.369 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:23.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.369 --rc genhtml_branch_coverage=1 00:38:23.369 --rc genhtml_function_coverage=1 00:38:23.369 --rc genhtml_legend=1 00:38:23.369 --rc geninfo_all_blocks=1 00:38:23.369 --rc 
geninfo_unexecuted_blocks=1 00:38:23.369 00:38:23.370 ' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.370 --rc genhtml_branch_coverage=1 00:38:23.370 --rc genhtml_function_coverage=1 00:38:23.370 --rc genhtml_legend=1 00:38:23.370 --rc geninfo_all_blocks=1 00:38:23.370 --rc geninfo_unexecuted_blocks=1 00:38:23.370 00:38:23.370 ' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:23.370 03:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:23.370 03:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:25.272 03:19:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:25.272 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:25.272 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.272 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.273 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.273 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.273 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.273 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:25.273 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:25.273 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:25.273 03:19:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.273 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:25.532 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:25.532 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:25.532 03:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:25.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:25.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:38:25.532 00:38:25.532 --- 10.0.0.2 ping statistics --- 00:38:25.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.532 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:25.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:25.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:38:25.532 00:38:25.532 --- 10.0.0.1 ping statistics --- 00:38:25.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.532 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:25.532 only one NIC for nvmf test 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:25.532 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:25.532 rmmod nvme_tcp 00:38:25.532 rmmod nvme_fabrics 00:38:25.532 rmmod nvme_keyring 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:25.533 03:19:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.533 03:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:28.074 03:19:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:28.074 00:38:28.074 real 0m4.500s 00:38:28.074 user 0m0.938s 00:38:28.074 sys 0m1.561s 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:28.074 ************************************ 00:38:28.074 END TEST nvmf_target_multipath 00:38:28.074 ************************************ 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:28.074 ************************************ 00:38:28.074 START TEST nvmf_zcopy 00:38:28.074 ************************************ 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:28.074 * Looking for test storage... 
00:38:28.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:28.074 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.075 --rc genhtml_branch_coverage=1 00:38:28.075 --rc genhtml_function_coverage=1 00:38:28.075 --rc genhtml_legend=1 00:38:28.075 --rc geninfo_all_blocks=1 00:38:28.075 --rc geninfo_unexecuted_blocks=1 00:38:28.075 00:38:28.075 ' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.075 --rc genhtml_branch_coverage=1 00:38:28.075 --rc genhtml_function_coverage=1 00:38:28.075 --rc genhtml_legend=1 00:38:28.075 --rc geninfo_all_blocks=1 00:38:28.075 --rc geninfo_unexecuted_blocks=1 00:38:28.075 00:38:28.075 ' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.075 --rc genhtml_branch_coverage=1 00:38:28.075 --rc genhtml_function_coverage=1 00:38:28.075 --rc genhtml_legend=1 00:38:28.075 --rc geninfo_all_blocks=1 00:38:28.075 --rc geninfo_unexecuted_blocks=1 00:38:28.075 00:38:28.075 ' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.075 --rc genhtml_branch_coverage=1 00:38:28.075 --rc genhtml_function_coverage=1 00:38:28.075 --rc genhtml_legend=1 00:38:28.075 --rc geninfo_all_blocks=1 00:38:28.075 --rc geninfo_unexecuted_blocks=1 00:38:28.075 00:38:28.075 ' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.075 03:19:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:28.075 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:29.979 03:19:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:29.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:29.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:29.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:29.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:29.979 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:29.980 03:19:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:29.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:29.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:38:29.980 00:38:29.980 --- 10.0.0.2 ping statistics --- 00:38:29.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.980 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:29.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:29.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:38:29.980 00:38:29.980 --- 10.0.0.1 ping statistics --- 00:38:29.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.980 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=440300 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 440300 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 440300 ']' 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:29.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.980 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.980 [2024-11-19 03:19:40.509702] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:29.980 [2024-11-19 03:19:40.510794] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:29.980 [2024-11-19 03:19:40.510863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:29.980 [2024-11-19 03:19:40.588161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.239 [2024-11-19 03:19:40.635340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:30.239 [2024-11-19 03:19:40.635417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:30.239 [2024-11-19 03:19:40.635431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:30.239 [2024-11-19 03:19:40.635443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:30.239 [2024-11-19 03:19:40.635453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:30.239 [2024-11-19 03:19:40.636116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:30.239 [2024-11-19 03:19:40.730001] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:30.239 [2024-11-19 03:19:40.730320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
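The trace above is the harness wiring the two detected ice ports into a private network namespace and then launching the target inside it. Condensed into a standalone shell sketch (run as root; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are what this particular run detected, not fixed defaults):

# sketch of the target/initiator wiring performed by nvmf_tcp_init above
ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip the rule
# later via iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator
# start the target in interrupt mode, pinned to core 1, as logged above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2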
00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.239 [2024-11-19 03:19:40.788755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.239 [2024-11-19 03:19:40.804901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:30.239 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:30.240 03:19:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.240 malloc0 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:30.240 { 00:38:30.240 "params": { 00:38:30.240 "name": "Nvme$subsystem", 00:38:30.240 "trtype": "$TEST_TRANSPORT", 00:38:30.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:30.240 "adrfam": "ipv4", 00:38:30.240 "trsvcid": "$NVMF_PORT", 00:38:30.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:30.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:30.240 "hdgst": ${hdgst:-false}, 00:38:30.240 "ddgst": ${ddgst:-false} 00:38:30.240 }, 00:38:30.240 "method": "bdev_nvme_attach_controller" 00:38:30.240 } 00:38:30.240 EOF 00:38:30.240 )") 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:30.240 03:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:30.240 "params": { 00:38:30.240 "name": "Nvme1", 00:38:30.240 "trtype": "tcp", 00:38:30.240 "traddr": "10.0.0.2", 00:38:30.240 "adrfam": "ipv4", 00:38:30.240 "trsvcid": "4420", 00:38:30.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:30.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:30.240 "hdgst": false, 00:38:30.240 "ddgst": false 00:38:30.240 }, 00:38:30.240 "method": "bdev_nvme_attach_controller" 00:38:30.240 }' 00:38:30.499 [2024-11-19 03:19:40.896154] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
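The params object printed just above is the bdev_nvme_attach_controller entry that gen_nvmf_target_json hands to bdevperf over /dev/fd/62; bdevperf then drives a 10-second verify workload at queue depth 128 with 8 KiB I/O against the malloc0 namespace exported by cnode1. A hand-written equivalent is sketched below; the /tmp path is made up for illustration, and the outer "subsystems"/"config" wrapper is the usual SPDK startup-config shape rather than something copied from this log:

# hypothetical standalone reproduction of the verify run traced above
cat > /tmp/zcopy_verify.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# initiator side runs in the root namespace, exactly as in the traced command line
./build/examples/bdevperf --json /tmp/zcopy_verify.json -t 10 -q 128 -w verify -o 8192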
00:38:30.499 [2024-11-19 03:19:40.896230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440415 ] 00:38:30.499 [2024-11-19 03:19:40.964140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.499 [2024-11-19 03:19:41.011316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.757 Running I/O for 10 seconds... 00:38:33.068 4920.00 IOPS, 38.44 MiB/s [2024-11-19T02:19:44.618Z] 4946.00 IOPS, 38.64 MiB/s [2024-11-19T02:19:45.555Z] 4974.67 IOPS, 38.86 MiB/s [2024-11-19T02:19:46.491Z] 4990.75 IOPS, 38.99 MiB/s [2024-11-19T02:19:47.426Z] 4998.40 IOPS, 39.05 MiB/s [2024-11-19T02:19:48.829Z] 5001.50 IOPS, 39.07 MiB/s [2024-11-19T02:19:49.457Z] 5002.00 IOPS, 39.08 MiB/s [2024-11-19T02:19:50.391Z] 4998.00 IOPS, 39.05 MiB/s [2024-11-19T02:19:51.768Z] 5000.67 IOPS, 39.07 MiB/s [2024-11-19T02:19:51.768Z] 4998.60 IOPS, 39.05 MiB/s 00:38:41.153 Latency(us) 00:38:41.153 [2024-11-19T02:19:51.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.153 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:41.153 Verification LBA range: start 0x0 length 0x1000 00:38:41.153 Nvme1n1 : 10.06 4981.31 38.92 0.00 0.00 25510.57 4199.16 46215.02 00:38:41.153 [2024-11-19T02:19:51.768Z] =================================================================================================================== 00:38:41.153 [2024-11-19T02:19:51.768Z] Total : 4981.31 38.92 0.00 0.00 25510.57 4199.16 46215.02 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=441629 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:41.153 { 00:38:41.153 "params": { 00:38:41.153 "name": "Nvme$subsystem", 00:38:41.153 "trtype": "$TEST_TRANSPORT", 00:38:41.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:41.153 "adrfam": "ipv4", 00:38:41.153 "trsvcid": "$NVMF_PORT", 00:38:41.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:41.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:41.153 "hdgst": ${hdgst:-false}, 00:38:41.153 "ddgst": ${ddgst:-false} 00:38:41.153 }, 00:38:41.153 "method": "bdev_nvme_attach_controller" 00:38:41.153 } 00:38:41.153 EOF 00:38:41.153 )") 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:41.153 
[2024-11-19 03:19:51.636656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.636727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:41.153 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:41.153 "params": { 00:38:41.153 "name": "Nvme1", 00:38:41.153 "trtype": "tcp", 00:38:41.153 "traddr": "10.0.0.2", 00:38:41.153 "adrfam": "ipv4", 00:38:41.153 "trsvcid": "4420", 00:38:41.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:41.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:41.153 "hdgst": false, 00:38:41.153 "ddgst": false 00:38:41.153 }, 00:38:41.153 "method": "bdev_nvme_attach_controller" 00:38:41.153 }' 00:38:41.153 [2024-11-19 03:19:51.644577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.644601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.652577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.652605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.660576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.660598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.668576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.668598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.676576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.676598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.680040] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:38:41.153 [2024-11-19 03:19:51.680122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441629 ] 00:38:41.153 [2024-11-19 03:19:51.684575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.684598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.692575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.692597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.700575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.700597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.708576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.708598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.716579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.716601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.724578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.724600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.732581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.732605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.740578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.740601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.748579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.748601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.749460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.153 [2024-11-19 03:19:51.756627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.756664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.153 [2024-11-19 03:19:51.764629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.153 [2024-11-19 03:19:51.764685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.772591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.772618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.780599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.780631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:38:41.412 [2024-11-19 03:19:51.788581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.788604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.796579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.796603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.804580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.804603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.804650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.412 [2024-11-19 03:19:51.812579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.812602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.820613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.820657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.828624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.828663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.836628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.836697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.844632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.844696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.852624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.852680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.860629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.860697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.868624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.868663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.876581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.876606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.884623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.884663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.892630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.892698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 
03:19:51.900627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.900683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.908581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.908604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.916581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.916613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.924583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.924608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.932583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.932614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.940581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.940605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.948580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.948604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.412 [2024-11-19 03:19:51.956585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.412 [2024-11-19 03:19:51.956608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:51.964578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:51.964601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:51.972578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:51.972600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:51.980580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:51.980603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:51.988581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:51.988606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:51.996588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:51.996615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:52.004578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:52.004602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:52.012578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:52.012602] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 [2024-11-19 03:19:52.020588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:52.020616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.413 Running I/O for 5 seconds... 00:38:41.413 [2024-11-19 03:19:52.028664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.413 [2024-11-19 03:19:52.028704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.045463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.045503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.055132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.055159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.069893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.069921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.079019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.079061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.093539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.093567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.103024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.103050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.117880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.117907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.127574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.127600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.142604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.142631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.156642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.156686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.166533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.166560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.178719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.178762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.194273] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.194302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.203477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.203503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.217557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.217584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.227325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.227354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.242121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.242147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.251954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.252007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.263712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.263753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.276324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.276353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.672 [2024-11-19 03:19:52.285728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.672 [2024-11-19 03:19:52.285763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.298251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.298277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.315433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.315458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.330885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.330913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.340260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.340286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.351914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.351956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.366455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.366496] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.383871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.383898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.397159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.397188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.406521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.406546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.418340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.418366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.433084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.433126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.442427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.442453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.454330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.454356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.470250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.470276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.479885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.479913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.491569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.491595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.505259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.505288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.514567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.514593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.526837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.526873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.931 [2024-11-19 03:19:52.541030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.931 [2024-11-19 03:19:52.541072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.190 [2024-11-19 03:19:52.550375] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.190 [2024-11-19 03:19:52.550401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.190 [2024-11-19 03:19:52.562108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.190 [2024-11-19 03:19:52.562134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.190 [2024-11-19 03:19:52.572205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.190 [2024-11-19 03:19:52.572230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.190 [2024-11-19 03:19:52.585007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.190 [2024-11-19 03:19:52.585035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.190 [2024-11-19 03:19:52.594934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.594961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.609178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.609205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.618716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.618754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.633544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.633570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.643911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.643938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.655687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.655723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.666461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.666502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.682204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.682231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.691629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.691655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.706446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.706472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.722928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.722957] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.740597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.740638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.750708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.750735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.762507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.762546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.778857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.778885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.788368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.788402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.191 [2024-11-19 03:19:52.800364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.191 [2024-11-19 03:19:52.800392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.811718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.811762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.822759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.822788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.839125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.839162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.854612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.854655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.864437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.864463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.876019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.876058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.886219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.886244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.897584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.897610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.913555] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.913581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.923141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.923167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.937144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.937170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.946490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.946516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.958392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.958418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.973108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.973151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.983039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.983065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:52.997157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:52.997184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:53.006251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:53.006277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:53.018254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:53.018287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 11627.00 IOPS, 90.84 MiB/s [2024-11-19T02:19:53.065Z] [2024-11-19 03:19:53.033407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:53.033433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:53.043047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:53.043074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:53.058016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:53.058058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.450 [2024-11-19 03:19:53.067186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.450 [2024-11-19 03:19:53.067215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.081181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:42.713 [2024-11-19 03:19:53.081206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.090458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.090485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.102572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.102599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.117774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.117802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.127579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.127618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.142362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.142387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.151758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.151786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.165568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.165594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.175200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.175226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.189064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.189091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.199141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.199167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.212895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.212924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.222138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.222167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.233895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.233921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.244684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.244746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.255667] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.255718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.266694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.266743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.280779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.280822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.290121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.290146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.301747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.301774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.312165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.312191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.713 [2024-11-19 03:19:53.323351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.713 [2024-11-19 03:19:53.323392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.339041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.339084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.356579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.356607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.367052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.367080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.381736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.381779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.392542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.392568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.403837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.403866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.418012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.418041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.427610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.427637] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.442416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.442443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.451705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.451733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.465228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.465254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.474263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.474290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.486242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.486270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.497298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.497324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.507993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.508021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.523290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.523316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.538876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.538919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.548150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.548175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.559909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.559937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.572480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.572511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:42.973 [2024-11-19 03:19:53.582173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:42.973 [2024-11-19 03:19:53.582200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.594931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.594961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.610503] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.610530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.619516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.619544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.633253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.633280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.642885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.642914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.655102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.655129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.670006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.670050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.679546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.679572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.694666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.694716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.704280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.704307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.716096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.716122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.726975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.727004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.740850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.740879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.749956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.232 [2024-11-19 03:19:53.749999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.232 [2024-11-19 03:19:53.765713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.765755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.233 [2024-11-19 03:19:53.775457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.775484] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.233 [2024-11-19 03:19:53.792103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.792132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.233 [2024-11-19 03:19:53.801517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.801544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.233 [2024-11-19 03:19:53.812921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.812949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.233 [2024-11-19 03:19:53.823773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.823800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.233 [2024-11-19 03:19:53.834682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.834732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.233 [2024-11-19 03:19:53.850676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.233 [2024-11-19 03:19:53.850717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.860605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.860632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.872267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.872294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.883172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.883199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.896371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.896399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.905809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.905836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.917814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.917843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.934066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.934092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.943477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.943503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.957384] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.957411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.967168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.967196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.978942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.978971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:53.992147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:53.992175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.005965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.005993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.015740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.015769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.031417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.031445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 11624.50 IOPS, 90.82 MiB/s [2024-11-19T02:19:54.107Z] [2024-11-19 03:19:54.046302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.046331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.056154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.056195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.067801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.067830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.081264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.081292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.090699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.090729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.492 [2024-11-19 03:19:54.102024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.492 [2024-11-19 03:19:54.102066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.117683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.117734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.126871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:43.751 [2024-11-19 03:19:54.126900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.138736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.138763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.154759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.154797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.164235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.164263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.176413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.176440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.187304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.187330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.202461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.202489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.212039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.212067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.223453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.223480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.236041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.236069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.245665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.245715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.257683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.257717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.268561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.268602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.279454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.279480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.292567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.292597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.302325] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.302353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.318196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.318222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.327535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.327561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.339655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.339706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.352994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.353037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.751 [2024-11-19 03:19:54.362610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.751 [2024-11-19 03:19:54.362637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.374933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.374991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.387592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.387622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.402955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.402984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.418924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.418952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.429582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.429608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.441355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.441382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.452511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.452537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.463567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.463608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.478922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.478951] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.488514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.488542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.500301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.500329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.511000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.511043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.526068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.526096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.535201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.535230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.010 [2024-11-19 03:19:54.549282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.010 [2024-11-19 03:19:54.549311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.011 [2024-11-19 03:19:54.558650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.011 [2024-11-19 03:19:54.558701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.011 [2024-11-19 03:19:54.570648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.011 [2024-11-19 03:19:54.570697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.011 [2024-11-19 03:19:54.585182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.011 [2024-11-19 03:19:54.585211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.011 [2024-11-19 03:19:54.595030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.011 [2024-11-19 03:19:54.595071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.011 [2024-11-19 03:19:54.610157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.011 [2024-11-19 03:19:54.610194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.011 [2024-11-19 03:19:54.619764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.011 [2024-11-19 03:19:54.619793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.631648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.631703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.646248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.646276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.656333] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.656359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.668870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.668900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.679868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.679896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.694149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.694176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.703294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.703320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.717399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.717427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.726851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.726880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.738965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.739005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.753919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.753947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.763248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.763274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.777030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.777071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.785845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.785873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.797662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.797711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.808149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.808175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.821062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.821091] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.830182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.830217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.842076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.842103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.852961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.853002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.864496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.864538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.270 [2024-11-19 03:19:54.875365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.270 [2024-11-19 03:19:54.875391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.888631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.888658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.898385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.898412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.910319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.910344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.926847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.926875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.944844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.944871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.955048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.955087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.968539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.968565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.978262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.978288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:54.989704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:54.989732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.000794] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.000822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.011851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.011878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.026648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.026697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.036165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.036190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 11633.33 IOPS, 90.89 MiB/s [2024-11-19T02:19:55.144Z] [2024-11-19 03:19:55.047984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.048010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.058260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.058285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.073750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.073777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.083215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.083241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.098013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.098038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.107091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.107117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.121675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.121724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.131578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.131605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.529 [2024-11-19 03:19:55.146598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.529 [2024-11-19 03:19:55.146625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.162581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.162608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.171567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:44.788 [2024-11-19 03:19:55.171593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.185775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.185802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.195280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.195307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.211230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.211258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.226961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.226990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.236770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.236798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.248597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.248623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.258682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.258731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.270440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.270468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.284985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.285014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.294992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.295019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.309710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.309736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.319630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.319658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.334313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.334341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.344174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.344201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.356051] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.356077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.366895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.366923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.381332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.381358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.391003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.391029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.788 [2024-11-19 03:19:55.405966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.788 [2024-11-19 03:19:55.406017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.415732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.415760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.430116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.430142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.440598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.440639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.452010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.452053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.467724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.467767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.482061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.482104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.491524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.491563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.505189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.505215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.514322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.514355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.526110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.526138] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.537061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.537086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.548248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.548273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.559014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.559039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.573717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.573746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.582780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.582808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.594433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.594458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.047 [2024-11-19 03:19:55.605292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.047 [2024-11-19 03:19:55.605318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.048 [2024-11-19 03:19:55.616000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.048 [2024-11-19 03:19:55.616028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.048 [2024-11-19 03:19:55.629981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.048 [2024-11-19 03:19:55.630007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.048 [2024-11-19 03:19:55.639097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.048 [2024-11-19 03:19:55.639122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.048 [2024-11-19 03:19:55.654567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.048 [2024-11-19 03:19:55.654593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.048 [2024-11-19 03:19:55.664590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.048 [2024-11-19 03:19:55.664618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.677119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.677147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.687923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.687962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.699639] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.699678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.714895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.714938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.732453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.732479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.742715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.742770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.754830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.754858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.768063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.768092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.777675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.777725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.789411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.789437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.800374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.800398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.811411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.811436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.824196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.824239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.833594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.833621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.845457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.845497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.856267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.856292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.867001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.867026] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.881478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.881505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.891111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.891136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.907333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.907359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.307 [2024-11-19 03:19:55.922655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.307 [2024-11-19 03:19:55.922685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:55.932494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:55.932522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:55.944343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:55.944384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:55.955594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:55.955619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:55.969624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:55.969657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:55.979273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:55.979299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:55.994608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:55.994633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.003908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.003937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.015784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.015812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.029037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.029065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.038590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.038616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 11642.50 IOPS, 90.96 MiB/s [2024-11-19T02:19:56.181Z] [2024-11-19 
03:19:56.054720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.054747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.064155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.064180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.075999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.076024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.090293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.090321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.099904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.099932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.112083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.112108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.124786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.124814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.133520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.133546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.145580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.145620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.156602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.156629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.167617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.167643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.566 [2024-11-19 03:19:56.180349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.566 [2024-11-19 03:19:56.180379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.190272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.190300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.202075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.202101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.218126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.218153] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.227789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.227817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.242906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.242934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.258518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.258546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.267952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.267993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.279457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.279482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.295448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.295474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.311070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.311095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.328592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.328619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.338579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.338603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.354296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.354322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.363931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.363958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.375775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.375804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.389840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.389882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.399405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.399431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.413360] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.413386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.422878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.422906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.825 [2024-11-19 03:19:56.437218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.825 [2024-11-19 03:19:56.437242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.447623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.447649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.461531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.461573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.470877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.470905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.482616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.482642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.497455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.497484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.507156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.507181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.522588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.522614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.532101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.532130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.543638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.543663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.557628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.557671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.566685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.566719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.578140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.578165] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.594330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.594355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.603910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.603937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.616064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.616090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.628905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.628933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.637958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.637998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.649625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.649650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.660492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.660519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.671224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.671252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.685330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.685358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.084 [2024-11-19 03:19:56.695085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.084 [2024-11-19 03:19:56.695111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.710408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.710436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.728220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.728248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.738385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.738414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.750211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.750239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.766096] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.766125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.776052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.776093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.788106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.788147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.803116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.803159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.818730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.818759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.828779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.828806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.840791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.840819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.851703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.851744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.863978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.864006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.873614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.873642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.885508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.885533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.896417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.896443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.907205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.907231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.920907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.920935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.930552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.930576] 
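The two messages repeating above are the target refusing to attach a namespace while NSID 1 is still in use; the zcopy test appears to keep re-issuing the add while I/O runs so that the subsystem pause/resume path is exercised. A minimal two-command reproduction of the same rejection, assuming a target that is already serving nqn.2016-06.io.spdk:cnode1 with a malloc0 bdev (both names appear later in this trace), could look like:

  # Hypothetical reproduction sketch -- not part of zcopy.sh itself.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first attach of NSID 1 succeeds
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: "Requested NSID 1 already in use"
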
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.942283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.942309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.343 [2024-11-19 03:19:56.958919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.343 [2024-11-19 03:19:56.958947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:56.974825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:56.974868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:56.992534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:56.992559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.003448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.003473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.019233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.019259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.034987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.035032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.603 11652.80 IOPS, 91.04 MiB/s [2024-11-19T02:19:57.218Z]
[2024-11-19 03:19:57.048937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.048974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.603
00:38:46.603 Latency(us)
00:38:46.603 [2024-11-19T02:19:57.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:46.603 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:38:46.603 Nvme1n1 : 5.01 11654.15 91.05 0.00 0.00 10967.62 3082.62 18350.08
00:38:46.603 [2024-11-19T02:19:57.218Z] ===================================================================================================================
00:38:46.603 [2024-11-19T02:19:57.218Z] Total : 11654.15 91.05 0.00 0.00 10967.62 3082.62 18350.08
00:38:46.603 [2024-11-19 03:19:57.056587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.056612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.064583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.064606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.072638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.072684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.080639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19
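As a quick cross-check of the summary above (a reading aid, not output from the run): the MiB/s column is consistent with IOPS multiplied by the 8192-byte I/O size.

  # 11654.15 IOPS * 8192 B per I/O / 2^20 B per MiB ~= 91.05 MiB/s
  echo '11654.15 * 8192 / 1048576' | bc -l
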
03:19:57.080712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.088638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.088697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.096639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.096686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.104634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.104684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.112679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.112742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.120641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.120709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.128646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.128707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.136639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.136697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.144642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.144701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.152638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.152699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.160642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.160703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.168640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.168701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.176616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.176660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.184576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.184599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.192638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.192696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.200643] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.200702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.208640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.208716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.603 [2024-11-19 03:19:57.216611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.603 [2024-11-19 03:19:57.216639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.862 [2024-11-19 03:19:57.224589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.862 [2024-11-19 03:19:57.224617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.862 [2024-11-19 03:19:57.232570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.862 [2024-11-19 03:19:57.232598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (441629) - No such process 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 441629 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.862 delay0 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.862 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.863 03:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:46.863 [2024-11-19 03:19:57.351660] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:54.971 [2024-11-19 
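The tail of the zcopy test above swaps the namespace for a deliberately slow delay bdev and then drives it with the abort example so that most queued I/Os end up aborted. Collapsed into standalone commands, the sequence is roughly the following sketch; it assumes the same running nvmf/TCP target, the default RPC socket, and the subsystem, bdev, and address values shown in the trace.

  RPC=./scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_subsystem_remove_ns "$NQN" 1            # detach the malloc-backed NSID 1
  $RPC bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s average/p99 latency per I/O
  $RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1     # re-expose it as NSID 1, now slow

  # With queue depth 64 against a ~1 s namespace, aborts dominate (see the
  # "abort submitted ... success ... unsuccessful" summary that follows).
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
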
03:20:04.482528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c3850 is same with the state(6) to be set 00:38:54.971 Initializing NVMe Controllers 00:38:54.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:54.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:54.971 Initialization complete. Launching workers. 00:38:54.971 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 219, failed: 26015 00:38:54.971 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26084, failed to submit 150 00:38:54.971 success 26015, unsuccessful 69, failed 0 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:54.971 rmmod nvme_tcp 00:38:54.971 rmmod nvme_fabrics 00:38:54.971 rmmod nvme_keyring 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 440300 ']' 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 440300 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 440300 ']' 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 440300 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440300 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440300' 00:38:54.971 killing process with pid 440300 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@973 -- # kill 440300 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 440300 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.971 03:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:56.349 00:38:56.349 real 0m28.640s 00:38:56.349 user 0m39.199s 00:38:56.349 sys 0m10.685s 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:56.349 ************************************ 00:38:56.349 END TEST nvmf_zcopy 00:38:56.349 ************************************ 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:56.349 ************************************ 00:38:56.349 START TEST nvmf_nmic 00:38:56.349 ************************************ 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:56.349 * Looking for test storage... 
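The zcopy run has just been torn down by nvmftestfini; before the nmic test output continues, that teardown collapses into roughly the following manual steps (a sketch reconstructed from the trace, not a copy of common.sh; the PID 440300 and interface cvl_0_1 are specific to this run).

  modprobe -v -r nvme-tcp       # the -v output above shows this also rmmod'ed nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 440300 && wait 440300    # works in the test because the target was started by the same shell

  # Drop the SPDK_NVMF iptables rules and flush the test interface, as iptr/remove_spdk_ns do.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
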
00:38:56.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:38:56.349 03:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.609 --rc genhtml_branch_coverage=1 00:38:56.609 --rc genhtml_function_coverage=1 00:38:56.609 --rc genhtml_legend=1 00:38:56.609 --rc geninfo_all_blocks=1 00:38:56.609 --rc geninfo_unexecuted_blocks=1 00:38:56.609 00:38:56.609 ' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.609 --rc genhtml_branch_coverage=1 00:38:56.609 --rc genhtml_function_coverage=1 00:38:56.609 --rc genhtml_legend=1 00:38:56.609 --rc geninfo_all_blocks=1 00:38:56.609 --rc geninfo_unexecuted_blocks=1 00:38:56.609 00:38:56.609 ' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.609 --rc genhtml_branch_coverage=1 00:38:56.609 --rc genhtml_function_coverage=1 00:38:56.609 --rc genhtml_legend=1 00:38:56.609 --rc geninfo_all_blocks=1 00:38:56.609 --rc geninfo_unexecuted_blocks=1 00:38:56.609 00:38:56.609 ' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.609 --rc genhtml_branch_coverage=1 00:38:56.609 --rc genhtml_function_coverage=1 00:38:56.609 --rc genhtml_legend=1 00:38:56.609 --rc geninfo_all_blocks=1 00:38:56.609 --rc geninfo_unexecuted_blocks=1 00:38:56.609 00:38:56.609 ' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
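The lt/cmp_versions trace above is scripts/common.sh deciding that the installed lcov (1.15) is older than 2, which is why the --rc lcov_branch_coverage/lcov_function_coverage options get selected next. A simplified re-implementation of that comparison, not the actual scripts/common.sh code, would be:

  # Split both versions on '.', '-' or ':' and compare numerically field by field.
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1    # equal means "not less than"
  }

  # lcov_opts is an illustrative name; the real script exports LCOV_OPTS/LCOV itself.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi
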
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.609 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.610 03:20:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:56.610 03:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:59.147 03:20:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:59.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.147 03:20:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:59.147 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:59.147 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.147 
03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.147 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:59.147 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
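At this point the trace has found the two Intel E810 ports (cvl_0_0 and cvl_0_1), picked cvl_0_0 as the target interface and cvl_0_1 as the initiator interface, and started moving the target port into the cvl_0_0_ns_spdk network namespace. A simplified sketch of the topology being built, using only the interface names and addresses that appear in the trace (the real nvmf_tcp_init also flushes stale addresses, brings the links up, opens TCP port 4420 in iptables and ping-checks both directions, as the following lines show):

  # Sketch of the TCP test-bed setup recorded in this trace (names/IPs taken from the log above).
  ip netns add cvl_0_0_ns_spdk                                         # namespace that will host the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target interface into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace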
00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:59.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:38:59.148 00:38:59.148 --- 10.0.0.2 ping statistics --- 00:38:59.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.148 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:59.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:38:59.148 00:38:59.148 --- 10.0.0.1 ping statistics --- 00:38:59.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.148 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=445003 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 445003 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 445003 ']' 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.148 [2024-11-19 03:20:09.376861] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:59.148 [2024-11-19 03:20:09.378076] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:38:59.148 [2024-11-19 03:20:09.378130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.148 [2024-11-19 03:20:09.450507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:59.148 [2024-11-19 03:20:09.496752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.148 [2024-11-19 03:20:09.496810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.148 [2024-11-19 03:20:09.496834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.148 [2024-11-19 03:20:09.496845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.148 [2024-11-19 03:20:09.496854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.148 [2024-11-19 03:20:09.498418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.148 [2024-11-19 03:20:09.498511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:59.148 [2024-11-19 03:20:09.498578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:59.148 [2024-11-19 03:20:09.498580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.148 [2024-11-19 03:20:09.578075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:59.148 [2024-11-19 03:20:09.578324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:59.148 [2024-11-19 03:20:09.578538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
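The target application has now been launched inside that namespace as 'ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF' (pid 445003): DPDK EAL initializes, four reactors start on cores 0-3, and the thread.c notices record each poll-group thread being set to interrupt (event-driven) mode instead of busy-polling. The nmic test then provisions the target over /var/tmp/spdk.sock; the rpc_cmd lines that follow correspond roughly to the rpc.py invocations sketched here (flags copied from the trace, script path shortened):

  # Approximate RPC sequence issued by the nmic test (see the rpc_cmd lines below).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                             # TCP transport, options as traced
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                # 64 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 then tries to add the same Malloc0 to a second subsystem (cnode2) and expects the -32602 error, since the bdev is already claimed exclusive_write by cnode1; test case 2 adds a second listener on port 4421 and connects the host over both paths.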
00:38:59.148 [2024-11-19 03:20:09.579118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:59.148 [2024-11-19 03:20:09.579335] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.148 [2024-11-19 03:20:09.647247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.148 Malloc0 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.148 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.149 
03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.149 [2024-11-19 03:20:09.715428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:59.149 test case1: single bdev can't be used in multiple subsystems 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.149 [2024-11-19 03:20:09.739166] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:38:59.149 [2024-11-19 03:20:09.739196] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:59.149 [2024-11-19 03:20:09.739210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.149 request: 00:38:59.149 { 00:38:59.149 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:59.149 "namespace": { 00:38:59.149 "bdev_name": "Malloc0", 00:38:59.149 "no_auto_visible": false 00:38:59.149 }, 00:38:59.149 "method": "nvmf_subsystem_add_ns", 00:38:59.149 "req_id": 1 00:38:59.149 } 00:38:59.149 Got JSON-RPC error response 00:38:59.149 response: 00:38:59.149 { 00:38:59.149 "code": -32602, 00:38:59.149 "message": "Invalid parameters" 00:38:59.149 } 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:59.149 03:20:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:59.149 Adding namespace failed - expected result. 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:59.149 test case2: host connect to nvmf target in multiple paths 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.149 [2024-11-19 03:20:09.747273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.149 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:59.407 03:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:59.665 03:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:59.665 03:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:38:59.665 03:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:59.665 03:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:38:59.665 03:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:01.562 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:01.562 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:01.562 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:01.562 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:01.562 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:01.562 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:01.562 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:01.562 [global] 00:39:01.562 thread=1 00:39:01.562 invalidate=1 
00:39:01.562 rw=write 00:39:01.562 time_based=1 00:39:01.562 runtime=1 00:39:01.562 ioengine=libaio 00:39:01.562 direct=1 00:39:01.562 bs=4096 00:39:01.563 iodepth=1 00:39:01.563 norandommap=0 00:39:01.563 numjobs=1 00:39:01.563 00:39:01.563 verify_dump=1 00:39:01.563 verify_backlog=512 00:39:01.563 verify_state_save=0 00:39:01.563 do_verify=1 00:39:01.563 verify=crc32c-intel 00:39:01.563 [job0] 00:39:01.563 filename=/dev/nvme0n1 00:39:01.821 Could not set queue depth (nvme0n1) 00:39:01.821 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:01.821 fio-3.35 00:39:01.821 Starting 1 thread 00:39:03.193 00:39:03.193 job0: (groupid=0, jobs=1): err= 0: pid=445503: Tue Nov 19 03:20:13 2024 00:39:03.193 read: IOPS=1826, BW=7305KiB/s (7480kB/s)(7312KiB/1001msec) 00:39:03.193 slat (nsec): min=6903, max=60370, avg=15077.32, stdev=5754.96 00:39:03.193 clat (usec): min=194, max=997, avg=278.36, stdev=85.04 00:39:03.193 lat (usec): min=202, max=1006, avg=293.43, stdev=87.18 00:39:03.193 clat percentiles (usec): 00:39:03.193 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 237], 00:39:03.193 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:39:03.193 | 70.00th=[ 255], 80.00th=[ 302], 90.00th=[ 441], 95.00th=[ 490], 00:39:03.193 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[ 865], 99.95th=[ 996], 00:39:03.193 | 99.99th=[ 996] 00:39:03.193 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:03.193 slat (usec): min=9, max=28158, avg=34.03, stdev=621.81 00:39:03.193 clat (usec): min=142, max=371, avg=182.41, stdev=25.86 00:39:03.193 lat (usec): min=152, max=28478, avg=216.45, stdev=625.50 00:39:03.193 clat percentiles (usec): 00:39:03.193 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 163], 00:39:03.193 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:39:03.193 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 227], 00:39:03.193 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 367], 00:39:03.193 | 99.99th=[ 371] 00:39:03.193 bw ( KiB/s): min= 8288, max= 8288, per=100.00%, avg=8288.00, stdev= 0.00, samples=1 00:39:03.193 iops : min= 2072, max= 2072, avg=2072.00, stdev= 0.00, samples=1 00:39:03.193 lat (usec) : 250=80.62%, 500=17.91%, 750=1.37%, 1000=0.10% 00:39:03.193 cpu : usr=5.20%, sys=9.20%, ctx=3879, majf=0, minf=1 00:39:03.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.194 issued rwts: total=1828,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:03.194 00:39:03.194 Run status group 0 (all jobs): 00:39:03.194 READ: bw=7305KiB/s (7480kB/s), 7305KiB/s-7305KiB/s (7480kB/s-7480kB/s), io=7312KiB (7487kB), run=1001-1001msec 00:39:03.194 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:39:03.194 00:39:03.194 Disk stats (read/write): 00:39:03.194 nvme0n1: ios=1588/1938, merge=0/0, ticks=1237/318, in_queue=1555, util=98.70% 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:03.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.194 rmmod nvme_tcp 00:39:03.194 rmmod nvme_fabrics 00:39:03.194 rmmod nvme_keyring 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 445003 ']' 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 445003 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 445003 ']' 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 445003 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445003 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445003' 00:39:03.194 killing 
process with pid 445003 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 445003 00:39:03.194 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 445003 00:39:03.452 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:03.452 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.452 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.452 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:03.452 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:03.452 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.452 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.452 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.452 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.452 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.452 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.452 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:05.988 00:39:05.988 real 0m9.145s 00:39:05.988 user 0m16.927s 00:39:05.988 sys 0m3.537s 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:05.988 ************************************ 00:39:05.988 END TEST nvmf_nmic 00:39:05.988 ************************************ 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:05.988 ************************************ 00:39:05.988 START TEST nvmf_fio_target 00:39:05.988 ************************************ 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:05.988 * Looking for test storage... 
00:39:05.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:05.988 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:05.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.989 --rc genhtml_branch_coverage=1 00:39:05.989 --rc genhtml_function_coverage=1 00:39:05.989 --rc genhtml_legend=1 00:39:05.989 --rc geninfo_all_blocks=1 00:39:05.989 --rc geninfo_unexecuted_blocks=1 00:39:05.989 00:39:05.989 ' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:05.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.989 --rc genhtml_branch_coverage=1 00:39:05.989 --rc genhtml_function_coverage=1 00:39:05.989 --rc genhtml_legend=1 00:39:05.989 --rc geninfo_all_blocks=1 00:39:05.989 --rc geninfo_unexecuted_blocks=1 00:39:05.989 00:39:05.989 ' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:05.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.989 --rc genhtml_branch_coverage=1 00:39:05.989 --rc genhtml_function_coverage=1 00:39:05.989 --rc genhtml_legend=1 00:39:05.989 --rc geninfo_all_blocks=1 00:39:05.989 --rc geninfo_unexecuted_blocks=1 00:39:05.989 00:39:05.989 ' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:05.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.989 --rc genhtml_branch_coverage=1 00:39:05.989 --rc genhtml_function_coverage=1 00:39:05.989 --rc genhtml_legend=1 00:39:05.989 --rc geninfo_all_blocks=1 00:39:05.989 --rc geninfo_unexecuted_blocks=1 00:39:05.989 
00:39:05.989 ' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.989 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:05.990 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.990 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:05.990 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:05.990 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:05.990 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:07.893 03:20:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:07.893 03:20:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:07.893 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:07.893 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:07.893 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:07.893 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:07.893 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:07.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:07.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:39:07.894 00:39:07.894 --- 10.0.0.2 ping statistics --- 00:39:07.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.894 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:07.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:07.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:39:07.894 00:39:07.894 --- 10.0.0.1 ping statistics --- 00:39:07.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.894 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=447579 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 447579 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 447579 ']' 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:07.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.894 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:08.153 [2024-11-19 03:20:18.547309] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:08.153 [2024-11-19 03:20:18.548489] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:39:08.153 [2024-11-19 03:20:18.548560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.153 [2024-11-19 03:20:18.624178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:08.153 [2024-11-19 03:20:18.671639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.153 [2024-11-19 03:20:18.671721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.153 [2024-11-19 03:20:18.671736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.153 [2024-11-19 03:20:18.671748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.153 [2024-11-19 03:20:18.671772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:08.153 [2024-11-19 03:20:18.673343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.153 [2024-11-19 03:20:18.673452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.153 [2024-11-19 03:20:18.673547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:08.153 [2024-11-19 03:20:18.673555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.153 [2024-11-19 03:20:18.753908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:08.153 [2024-11-19 03:20:18.754101] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:08.153 [2024-11-19 03:20:18.754412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:08.153 [2024-11-19 03:20:18.755053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:08.153 [2024-11-19 03:20:18.755265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
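The target and initiator setup that target/fio.sh performs in the lines below reduces to the following RPC sequence. This is a condensed sketch for readability, not a literal excerpt of the log: $SPDK_DIR is shorthand introduced here for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and the MallocN names assume SPDK's default malloc bdev numbering as it appears in the log.

    rpc="$SPDK_DIR/scripts/rpc.py"
    # TCP transport, with the options nvmf/common.sh selects for tcp (-t tcp -o) plus an 8192-byte IO unit
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # seven 64 MB malloc bdevs with 512-byte blocks (Malloc0..Malloc6)
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
    # raid0 over Malloc2/Malloc3, concat over Malloc4/Malloc5/Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # one subsystem carrying four namespaces, listening on the target-netns address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect, after which fio-wrapper drives /dev/nvme0n1..n4
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420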
00:39:08.411 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.411 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:08.411 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:08.411 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.411 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:08.411 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.411 03:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:08.670 [2024-11-19 03:20:19.050282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.671 03:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:08.929 03:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:08.929 03:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:09.187 03:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:09.187 03:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:09.445 03:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:09.445 03:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:09.703 03:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:09.703 03:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:09.961 03:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:10.529 03:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:10.529 03:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:10.787 03:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:10.787 03:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:11.045 03:20:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:11.045 03:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:11.304 03:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:11.562 03:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:11.562 03:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:11.821 03:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:11.821 03:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:12.078 03:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.336 [2024-11-19 03:20:22.938444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.593 03:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:12.850 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:13.108 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:13.365 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:13.365 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:13.365 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:13.365 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:13.365 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:13.365 03:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:15.263 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:15.263 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:39:15.263 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:15.263 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:15.263 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:15.263 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:39:15.263 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:15.263 [global] 00:39:15.263 thread=1 00:39:15.263 invalidate=1 00:39:15.263 rw=write 00:39:15.263 time_based=1 00:39:15.263 runtime=1 00:39:15.263 ioengine=libaio 00:39:15.263 direct=1 00:39:15.263 bs=4096 00:39:15.263 iodepth=1 00:39:15.263 norandommap=0 00:39:15.263 numjobs=1 00:39:15.263 00:39:15.263 verify_dump=1 00:39:15.263 verify_backlog=512 00:39:15.263 verify_state_save=0 00:39:15.263 do_verify=1 00:39:15.263 verify=crc32c-intel 00:39:15.263 [job0] 00:39:15.263 filename=/dev/nvme0n1 00:39:15.263 [job1] 00:39:15.263 filename=/dev/nvme0n2 00:39:15.263 [job2] 00:39:15.263 filename=/dev/nvme0n3 00:39:15.263 [job3] 00:39:15.263 filename=/dev/nvme0n4 00:39:15.263 Could not set queue depth (nvme0n1) 00:39:15.263 Could not set queue depth (nvme0n2) 00:39:15.263 Could not set queue depth (nvme0n3) 00:39:15.263 Could not set queue depth (nvme0n4) 00:39:15.521 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.521 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.521 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.521 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.521 fio-3.35 00:39:15.521 Starting 4 threads 00:39:16.894 00:39:16.894 job0: (groupid=0, jobs=1): err= 0: pid=448636: Tue Nov 19 03:20:27 2024 00:39:16.894 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:16.894 slat (nsec): min=4626, max=38830, avg=7620.73, stdev=4174.43 00:39:16.894 clat (usec): min=201, max=41189, avg=410.74, stdev=2165.49 00:39:16.895 lat (usec): min=211, max=41195, avg=418.36, stdev=2165.89 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:39:16.895 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:39:16.895 | 70.00th=[ 262], 80.00th=[ 379], 90.00th=[ 478], 95.00th=[ 498], 00:39:16.895 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:39:16.895 | 99.99th=[41157] 00:39:16.895 write: IOPS=1668, BW=6673KiB/s (6833kB/s)(6680KiB/1001msec); 0 zone resets 00:39:16.895 slat (usec): min=5, max=1301, avg= 8.61, stdev=31.80 00:39:16.895 clat (usec): min=145, max=888, avg=201.22, stdev=60.33 00:39:16.895 lat (usec): min=151, max=1732, avg=209.84, stdev=71.37 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:39:16.895 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:39:16.895 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 306], 00:39:16.895 | 99.00th=[ 412], 99.50th=[ 
437], 99.90th=[ 685], 99.95th=[ 889], 00:39:16.895 | 99.99th=[ 889] 00:39:16.895 bw ( KiB/s): min= 4728, max= 4728, per=21.40%, avg=4728.00, stdev= 0.00, samples=1 00:39:16.895 iops : min= 1182, max= 1182, avg=1182.00, stdev= 0.00, samples=1 00:39:16.895 lat (usec) : 250=72.49%, 500=25.17%, 750=2.12%, 1000=0.06% 00:39:16.895 lat (msec) : 50=0.16% 00:39:16.895 cpu : usr=1.40%, sys=2.40%, ctx=3210, majf=0, minf=1 00:39:16.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.895 issued rwts: total=1536,1670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:16.895 job1: (groupid=0, jobs=1): err= 0: pid=448640: Tue Nov 19 03:20:27 2024 00:39:16.895 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:16.895 slat (nsec): min=5016, max=36860, avg=6112.22, stdev=2361.47 00:39:16.895 clat (usec): min=197, max=873, avg=256.09, stdev=34.79 00:39:16.895 lat (usec): min=202, max=885, avg=262.21, stdev=35.07 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:39:16.895 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:39:16.895 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 310], 00:39:16.895 | 99.00th=[ 383], 99.50th=[ 429], 99.90th=[ 578], 99.95th=[ 660], 00:39:16.895 | 99.99th=[ 873] 00:39:16.895 write: IOPS=2342, BW=9371KiB/s (9596kB/s)(9380KiB/1001msec); 0 zone resets 00:39:16.895 slat (nsec): min=6592, max=65282, avg=8049.94, stdev=2479.99 00:39:16.895 clat (usec): min=142, max=934, avg=185.39, stdev=47.54 00:39:16.895 lat (usec): min=149, max=943, avg=193.44, stdev=47.73 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:39:16.895 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:39:16.895 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:39:16.895 | 99.00th=[ 277], 99.50th=[ 660], 99.90th=[ 889], 99.95th=[ 914], 00:39:16.895 | 99.99th=[ 938] 00:39:16.895 bw ( KiB/s): min= 8912, max= 8912, per=40.34%, avg=8912.00, stdev= 0.00, samples=1 00:39:16.895 iops : min= 2228, max= 2228, avg=2228.00, stdev= 0.00, samples=1 00:39:16.895 lat (usec) : 250=76.58%, 500=22.99%, 750=0.30%, 1000=0.14% 00:39:16.895 cpu : usr=1.80%, sys=5.00%, ctx=4394, majf=0, minf=2 00:39:16.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.895 issued rwts: total=2048,2345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:16.895 job2: (groupid=0, jobs=1): err= 0: pid=448641: Tue Nov 19 03:20:27 2024 00:39:16.895 read: IOPS=26, BW=107KiB/s (110kB/s)(108KiB/1005msec) 00:39:16.895 slat (nsec): min=5433, max=29845, avg=13852.89, stdev=5390.79 00:39:16.895 clat (usec): min=371, max=41929, avg=33504.45, stdev=16080.06 00:39:16.895 lat (usec): min=378, max=41943, avg=33518.30, stdev=16082.51 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 408], 20.00th=[41157], 00:39:16.895 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:16.895 
| 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:16.895 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:16.895 | 99.99th=[41681] 00:39:16.895 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:39:16.895 slat (nsec): min=6649, max=24901, avg=7862.01, stdev=2205.07 00:39:16.895 clat (usec): min=161, max=589, avg=184.10, stdev=24.16 00:39:16.895 lat (usec): min=169, max=613, avg=191.96, stdev=24.95 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 174], 00:39:16.895 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:39:16.895 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 212], 00:39:16.895 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 586], 99.95th=[ 586], 00:39:16.895 | 99.99th=[ 586] 00:39:16.895 bw ( KiB/s): min= 4096, max= 4096, per=18.54%, avg=4096.00, stdev= 0.00, samples=1 00:39:16.895 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:16.895 lat (usec) : 250=94.43%, 500=1.30%, 750=0.19% 00:39:16.895 lat (msec) : 50=4.08% 00:39:16.895 cpu : usr=0.10%, sys=0.50%, ctx=541, majf=0, minf=1 00:39:16.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.895 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:16.895 job3: (groupid=0, jobs=1): err= 0: pid=448642: Tue Nov 19 03:20:27 2024 00:39:16.895 read: IOPS=525, BW=2104KiB/s (2154kB/s)(2108KiB/1002msec) 00:39:16.895 slat (nsec): min=5498, max=20990, avg=6561.24, stdev=1874.40 00:39:16.895 clat (usec): min=196, max=41368, avg=1405.64, stdev=6784.44 00:39:16.895 lat (usec): min=202, max=41375, avg=1412.20, stdev=6785.55 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 235], 00:39:16.895 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:39:16.895 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 281], 00:39:16.895 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:16.895 | 99.99th=[41157] 00:39:16.895 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:39:16.895 slat (nsec): min=6298, max=63605, avg=9133.06, stdev=3401.01 00:39:16.895 clat (usec): min=162, max=908, avg=238.49, stdev=63.84 00:39:16.895 lat (usec): min=172, max=919, avg=247.62, stdev=64.42 00:39:16.895 clat percentiles (usec): 00:39:16.895 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 184], 00:39:16.895 | 30.00th=[ 194], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 247], 00:39:16.895 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 318], 00:39:16.895 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[ 881], 99.95th=[ 906], 00:39:16.895 | 99.99th=[ 906] 00:39:16.895 bw ( KiB/s): min= 8192, max= 8192, per=37.08%, avg=8192.00, stdev= 0.00, samples=1 00:39:16.895 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:16.895 lat (usec) : 250=67.25%, 500=31.40%, 750=0.26%, 1000=0.13% 00:39:16.895 lat (msec) : 50=0.97% 00:39:16.895 cpu : usr=1.40%, sys=1.10%, ctx=1552, majf=0, minf=1 00:39:16.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:39:16.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.895 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:16.895 00:39:16.895 Run status group 0 (all jobs): 00:39:16.895 READ: bw=16.1MiB/s (16.9MB/s), 107KiB/s-8184KiB/s (110kB/s-8380kB/s), io=16.2MiB (16.9MB), run=1001-1005msec 00:39:16.895 WRITE: bw=21.6MiB/s (22.6MB/s), 2038KiB/s-9371KiB/s (2087kB/s-9596kB/s), io=21.7MiB (22.7MB), run=1001-1005msec 00:39:16.895 00:39:16.895 Disk stats (read/write): 00:39:16.895 nvme0n1: ios=1275/1536, merge=0/0, ticks=769/294, in_queue=1063, util=97.70% 00:39:16.895 nvme0n2: ios=1696/2048, merge=0/0, ticks=435/377, in_queue=812, util=86.78% 00:39:16.895 nvme0n3: ios=50/512, merge=0/0, ticks=1695/94, in_queue=1789, util=97.91% 00:39:16.895 nvme0n4: ios=523/1024, merge=0/0, ticks=575/242, in_queue=817, util=89.56% 00:39:16.895 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:16.895 [global] 00:39:16.895 thread=1 00:39:16.895 invalidate=1 00:39:16.895 rw=randwrite 00:39:16.895 time_based=1 00:39:16.895 runtime=1 00:39:16.895 ioengine=libaio 00:39:16.895 direct=1 00:39:16.895 bs=4096 00:39:16.895 iodepth=1 00:39:16.895 norandommap=0 00:39:16.895 numjobs=1 00:39:16.895 00:39:16.895 verify_dump=1 00:39:16.895 verify_backlog=512 00:39:16.895 verify_state_save=0 00:39:16.895 do_verify=1 00:39:16.895 verify=crc32c-intel 00:39:16.895 [job0] 00:39:16.895 filename=/dev/nvme0n1 00:39:16.895 [job1] 00:39:16.895 filename=/dev/nvme0n2 00:39:16.895 [job2] 00:39:16.895 filename=/dev/nvme0n3 00:39:16.895 [job3] 00:39:16.895 filename=/dev/nvme0n4 00:39:16.895 Could not set queue depth (nvme0n1) 00:39:16.895 Could not set queue depth (nvme0n2) 00:39:16.895 Could not set queue depth (nvme0n3) 00:39:16.895 Could not set queue depth (nvme0n4) 00:39:16.895 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.895 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.895 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.896 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.896 fio-3.35 00:39:16.896 Starting 4 threads 00:39:18.269 00:39:18.269 job0: (groupid=0, jobs=1): err= 0: pid=448868: Tue Nov 19 03:20:28 2024 00:39:18.269 read: IOPS=1594, BW=6378KiB/s (6531kB/s)(6384KiB/1001msec) 00:39:18.269 slat (nsec): min=6002, max=69121, avg=18396.55, stdev=11612.10 00:39:18.269 clat (usec): min=220, max=597, avg=311.66, stdev=72.50 00:39:18.269 lat (usec): min=241, max=604, avg=330.05, stdev=78.79 00:39:18.269 clat percentiles (usec): 00:39:18.269 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:39:18.269 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:39:18.269 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 465], 95.00th=[ 498], 00:39:18.269 | 99.00th=[ 515], 99.50th=[ 523], 99.90th=[ 594], 99.95th=[ 594], 00:39:18.269 | 99.99th=[ 594] 00:39:18.269 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:18.269 slat (nsec): min=6566, max=65738, avg=12132.12, stdev=4976.08 00:39:18.269 clat (usec): min=155, max=1272, 
avg=211.09, stdev=49.87 00:39:18.269 lat (usec): min=166, max=1283, avg=223.23, stdev=50.32 00:39:18.269 clat percentiles (usec): 00:39:18.269 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:39:18.269 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:39:18.269 | 70.00th=[ 219], 80.00th=[ 237], 90.00th=[ 269], 95.00th=[ 281], 00:39:18.269 | 99.00th=[ 351], 99.50th=[ 404], 99.90th=[ 553], 99.95th=[ 1090], 00:39:18.269 | 99.99th=[ 1270] 00:39:18.269 bw ( KiB/s): min= 8192, max= 8192, per=31.42%, avg=8192.00, stdev= 0.00, samples=1 00:39:18.269 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:18.269 lat (usec) : 250=48.57%, 500=49.75%, 750=1.62% 00:39:18.269 lat (msec) : 2=0.05% 00:39:18.269 cpu : usr=3.40%, sys=5.70%, ctx=3649, majf=0, minf=1 00:39:18.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.269 issued rwts: total=1596,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.269 job1: (groupid=0, jobs=1): err= 0: pid=448869: Tue Nov 19 03:20:28 2024 00:39:18.269 read: IOPS=1993, BW=7972KiB/s (8163kB/s)(7980KiB/1001msec) 00:39:18.269 slat (nsec): min=4676, max=59618, avg=12028.72, stdev=5900.93 00:39:18.269 clat (usec): min=187, max=3920, avg=284.48, stdev=115.89 00:39:18.269 lat (usec): min=193, max=3935, avg=296.51, stdev=116.78 00:39:18.269 clat percentiles (usec): 00:39:18.269 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 225], 00:39:18.269 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 262], 00:39:18.270 | 70.00th=[ 277], 80.00th=[ 367], 90.00th=[ 429], 95.00th=[ 449], 00:39:18.270 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 758], 99.95th=[ 3916], 00:39:18.270 | 99.99th=[ 3916] 00:39:18.270 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:18.270 slat (nsec): min=6086, max=40301, avg=11979.50, stdev=4639.86 00:39:18.270 clat (usec): min=140, max=785, avg=180.13, stdev=34.42 00:39:18.270 lat (usec): min=149, max=798, avg=192.11, stdev=34.06 00:39:18.270 clat percentiles (usec): 00:39:18.270 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:39:18.270 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:39:18.270 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 229], 00:39:18.270 | 99.00th=[ 314], 99.50th=[ 355], 99.90th=[ 594], 99.95th=[ 709], 00:39:18.270 | 99.99th=[ 783] 00:39:18.270 bw ( KiB/s): min= 8440, max= 8440, per=32.37%, avg=8440.00, stdev= 0.00, samples=1 00:39:18.270 iops : min= 2110, max= 2110, avg=2110.00, stdev= 0.00, samples=1 00:39:18.270 lat (usec) : 250=74.52%, 500=24.51%, 750=0.89%, 1000=0.05% 00:39:18.270 lat (msec) : 4=0.02% 00:39:18.270 cpu : usr=2.60%, sys=5.20%, ctx=4045, majf=0, minf=1 00:39:18.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.270 issued rwts: total=1995,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.270 job2: (groupid=0, jobs=1): err= 0: pid=448870: Tue Nov 19 03:20:28 2024 00:39:18.270 read: IOPS=21, BW=86.2KiB/s 
(88.3kB/s)(88.0KiB/1021msec) 00:39:18.270 slat (nsec): min=14233, max=35777, avg=24170.27, stdev=9584.17 00:39:18.270 clat (usec): min=40588, max=41091, avg=40954.67, stdev=89.34 00:39:18.270 lat (usec): min=40604, max=41106, avg=40978.84, stdev=89.03 00:39:18.270 clat percentiles (usec): 00:39:18.270 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:18.270 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:18.270 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:18.270 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:18.270 | 99.99th=[41157] 00:39:18.270 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:39:18.270 slat (nsec): min=6168, max=39560, avg=10438.57, stdev=5249.20 00:39:18.270 clat (usec): min=148, max=1093, avg=218.56, stdev=93.82 00:39:18.270 lat (usec): min=156, max=1105, avg=229.00, stdev=94.87 00:39:18.270 clat percentiles (usec): 00:39:18.270 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:39:18.270 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 212], 00:39:18.270 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 277], 95.00th=[ 334], 00:39:18.270 | 99.00th=[ 685], 99.50th=[ 914], 99.90th=[ 1090], 99.95th=[ 1090], 00:39:18.270 | 99.99th=[ 1090] 00:39:18.270 bw ( KiB/s): min= 4096, max= 4096, per=15.71%, avg=4096.00, stdev= 0.00, samples=1 00:39:18.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:18.270 lat (usec) : 250=83.90%, 500=10.67%, 750=0.37%, 1000=0.56% 00:39:18.270 lat (msec) : 2=0.37%, 50=4.12% 00:39:18.270 cpu : usr=0.39%, sys=0.39%, ctx=536, majf=0, minf=1 00:39:18.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.270 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.270 job3: (groupid=0, jobs=1): err= 0: pid=448871: Tue Nov 19 03:20:28 2024 00:39:18.270 read: IOPS=1645, BW=6581KiB/s (6739kB/s)(6588KiB/1001msec) 00:39:18.270 slat (nsec): min=6250, max=63164, avg=13709.02, stdev=5735.30 00:39:18.270 clat (usec): min=196, max=766, avg=290.66, stdev=53.35 00:39:18.270 lat (usec): min=204, max=774, avg=304.37, stdev=57.34 00:39:18.270 clat percentiles (usec): 00:39:18.270 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:39:18.270 | 30.00th=[ 247], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 297], 00:39:18.270 | 70.00th=[ 310], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 383], 00:39:18.270 | 99.00th=[ 416], 99.50th=[ 441], 99.90th=[ 586], 99.95th=[ 766], 00:39:18.270 | 99.99th=[ 766] 00:39:18.270 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:18.270 slat (nsec): min=7692, max=70842, avg=14241.12, stdev=6768.67 00:39:18.270 clat (usec): min=152, max=1041, avg=221.38, stdev=52.61 00:39:18.270 lat (usec): min=165, max=1050, avg=235.62, stdev=54.44 00:39:18.270 clat percentiles (usec): 00:39:18.270 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 186], 00:39:18.270 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 223], 00:39:18.270 | 70.00th=[ 233], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 289], 00:39:18.270 | 99.00th=[ 367], 99.50th=[ 494], 99.90th=[ 947], 99.95th=[ 971], 00:39:18.270 | 99.99th=[ 1045] 00:39:18.270 bw ( KiB/s): 
min= 8192, max= 8192, per=31.42%, avg=8192.00, stdev= 0.00, samples=1 00:39:18.270 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:18.270 lat (usec) : 250=58.29%, 500=41.35%, 750=0.24%, 1000=0.08% 00:39:18.270 lat (msec) : 2=0.03% 00:39:18.270 cpu : usr=3.90%, sys=6.80%, ctx=3696, majf=0, minf=1 00:39:18.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.270 issued rwts: total=1647,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.270 00:39:18.270 Run status group 0 (all jobs): 00:39:18.270 READ: bw=20.1MiB/s (21.1MB/s), 86.2KiB/s-7972KiB/s (88.3kB/s-8163kB/s), io=20.5MiB (21.5MB), run=1001-1021msec 00:39:18.270 WRITE: bw=25.5MiB/s (26.7MB/s), 2006KiB/s-8184KiB/s (2054kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1021msec 00:39:18.270 00:39:18.270 Disk stats (read/write): 00:39:18.270 nvme0n1: ios=1442/1536, merge=0/0, ticks=1387/334, in_queue=1721, util=93.69% 00:39:18.270 nvme0n2: ios=1609/2048, merge=0/0, ticks=591/365, in_queue=956, util=97.56% 00:39:18.270 nvme0n3: ios=71/512, merge=0/0, ticks=896/105, in_queue=1001, util=97.49% 00:39:18.270 nvme0n4: ios=1489/1536, merge=0/0, ticks=567/348, in_queue=915, util=100.00% 00:39:18.270 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:18.270 [global] 00:39:18.270 thread=1 00:39:18.270 invalidate=1 00:39:18.270 rw=write 00:39:18.270 time_based=1 00:39:18.270 runtime=1 00:39:18.270 ioengine=libaio 00:39:18.270 direct=1 00:39:18.270 bs=4096 00:39:18.270 iodepth=128 00:39:18.270 norandommap=0 00:39:18.270 numjobs=1 00:39:18.270 00:39:18.270 verify_dump=1 00:39:18.270 verify_backlog=512 00:39:18.270 verify_state_save=0 00:39:18.270 do_verify=1 00:39:18.270 verify=crc32c-intel 00:39:18.270 [job0] 00:39:18.270 filename=/dev/nvme0n1 00:39:18.270 [job1] 00:39:18.270 filename=/dev/nvme0n2 00:39:18.270 [job2] 00:39:18.270 filename=/dev/nvme0n3 00:39:18.270 [job3] 00:39:18.270 filename=/dev/nvme0n4 00:39:18.270 Could not set queue depth (nvme0n1) 00:39:18.270 Could not set queue depth (nvme0n2) 00:39:18.270 Could not set queue depth (nvme0n3) 00:39:18.270 Could not set queue depth (nvme0n4) 00:39:18.528 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.528 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.528 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.528 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.528 fio-3.35 00:39:18.528 Starting 4 threads 00:39:19.901 00:39:19.901 job0: (groupid=0, jobs=1): err= 0: pid=449220: Tue Nov 19 03:20:30 2024 00:39:19.901 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:39:19.901 slat (usec): min=2, max=18714, avg=118.14, stdev=829.03 00:39:19.901 clat (usec): min=3642, max=60298, avg=15324.37, stdev=9955.71 00:39:19.901 lat (usec): min=3645, max=60301, avg=15442.51, stdev=10016.71 00:39:19.901 clat percentiles (usec): 00:39:19.901 | 1.00th=[ 6390], 5.00th=[ 9634], 10.00th=[10421], 
20.00th=[11338], 00:39:19.901 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12518], 00:39:19.901 | 70.00th=[13173], 80.00th=[15270], 90.00th=[22414], 95.00th=[38536], 00:39:19.901 | 99.00th=[59507], 99.50th=[60031], 99.90th=[60556], 99.95th=[60556], 00:39:19.901 | 99.99th=[60556] 00:39:19.901 write: IOPS=3717, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1003msec); 0 zone resets 00:39:19.901 slat (usec): min=3, max=20832, avg=145.19, stdev=1052.33 00:39:19.901 clat (usec): min=457, max=56019, avg=19382.32, stdev=12737.16 00:39:19.901 lat (usec): min=3227, max=64705, avg=19527.51, stdev=12805.53 00:39:19.901 clat percentiles (usec): 00:39:19.901 | 1.00th=[ 3818], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10683], 00:39:19.901 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12518], 60.00th=[13960], 00:39:19.901 | 70.00th=[20055], 80.00th=[30540], 90.00th=[42206], 95.00th=[47449], 00:39:19.901 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:39:19.901 | 99.99th=[55837] 00:39:19.901 bw ( KiB/s): min=12264, max=16544, per=22.95%, avg=14404.00, stdev=3026.42, samples=2 00:39:19.901 iops : min= 3066, max= 4136, avg=3601.00, stdev=756.60, samples=2 00:39:19.901 lat (usec) : 500=0.01% 00:39:19.901 lat (msec) : 2=0.01%, 4=1.31%, 10=10.98%, 20=66.98%, 50=17.09% 00:39:19.901 lat (msec) : 100=3.61% 00:39:19.901 cpu : usr=2.79%, sys=4.39%, ctx=340, majf=0, minf=1 00:39:19.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:19.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.901 issued rwts: total=3584,3729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:19.901 job1: (groupid=0, jobs=1): err= 0: pid=449221: Tue Nov 19 03:20:30 2024 00:39:19.901 read: IOPS=4169, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1003msec) 00:39:19.901 slat (usec): min=2, max=12460, avg=101.21, stdev=681.10 00:39:19.901 clat (usec): min=988, max=38973, avg=13694.91, stdev=4698.32 00:39:19.901 lat (usec): min=2439, max=39164, avg=13796.12, stdev=4730.71 00:39:19.901 clat percentiles (usec): 00:39:19.901 | 1.00th=[ 4817], 5.00th=[ 7963], 10.00th=[ 9241], 20.00th=[10814], 00:39:19.901 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12518], 60.00th=[13566], 00:39:19.901 | 70.00th=[14746], 80.00th=[16909], 90.00th=[19006], 95.00th=[22414], 00:39:19.901 | 99.00th=[30802], 99.50th=[32113], 99.90th=[39060], 99.95th=[39060], 00:39:19.901 | 99.99th=[39060] 00:39:19.901 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:39:19.901 slat (usec): min=3, max=18870, avg=110.10, stdev=778.18 00:39:19.901 clat (usec): min=2405, max=38276, avg=15183.73, stdev=6921.32 00:39:19.901 lat (usec): min=2420, max=38280, avg=15293.82, stdev=6985.99 00:39:19.901 clat percentiles (usec): 00:39:19.901 | 1.00th=[ 5342], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[10421], 00:39:19.901 | 30.00th=[11076], 40.00th=[12125], 50.00th=[13435], 60.00th=[15008], 00:39:19.901 | 70.00th=[15926], 80.00th=[17171], 90.00th=[27919], 95.00th=[31851], 00:39:19.901 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:39:19.901 | 99.99th=[38536] 00:39:19.901 bw ( KiB/s): min=18131, max=18368, per=29.07%, avg=18249.50, stdev=167.58, samples=2 00:39:19.902 iops : min= 4532, max= 4592, avg=4562.00, stdev=42.43, samples=2 00:39:19.902 lat (usec) : 1000=0.01% 00:39:19.902 lat (msec) : 4=0.57%, 10=13.87%, 20=74.14%, 
50=11.41% 00:39:19.902 cpu : usr=4.29%, sys=6.59%, ctx=325, majf=0, minf=1 00:39:19.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:19.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.902 issued rwts: total=4182,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:19.902 job2: (groupid=0, jobs=1): err= 0: pid=449222: Tue Nov 19 03:20:30 2024 00:39:19.902 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:39:19.902 slat (usec): min=2, max=14313, avg=144.56, stdev=916.00 00:39:19.902 clat (usec): min=4629, max=40305, avg=19746.64, stdev=6581.96 00:39:19.902 lat (usec): min=4635, max=40322, avg=19891.20, stdev=6642.88 00:39:19.902 clat percentiles (usec): 00:39:19.902 | 1.00th=[ 4752], 5.00th=[11469], 10.00th=[12518], 20.00th=[14222], 00:39:19.902 | 30.00th=[15533], 40.00th=[16057], 50.00th=[17957], 60.00th=[21365], 00:39:19.902 | 70.00th=[23462], 80.00th=[25822], 90.00th=[28705], 95.00th=[32113], 00:39:19.902 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36963], 00:39:19.902 | 99.99th=[40109] 00:39:19.902 write: IOPS=3515, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1004msec); 0 zone resets 00:39:19.902 slat (usec): min=3, max=27708, avg=149.43, stdev=1028.83 00:39:19.902 clat (usec): min=2805, max=66633, avg=18251.32, stdev=8334.96 00:39:19.902 lat (usec): min=3902, max=66639, avg=18400.75, stdev=8393.80 00:39:19.902 clat percentiles (usec): 00:39:19.902 | 1.00th=[ 7570], 5.00th=[11469], 10.00th=[12649], 20.00th=[13173], 00:39:19.902 | 30.00th=[13566], 40.00th=[13960], 50.00th=[15664], 60.00th=[17695], 00:39:19.902 | 70.00th=[19268], 80.00th=[22676], 90.00th=[25560], 95.00th=[29230], 00:39:19.902 | 99.00th=[60031], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:39:19.902 | 99.99th=[66847] 00:39:19.902 bw ( KiB/s): min=13157, max=14040, per=21.66%, avg=13598.50, stdev=624.38, samples=2 00:39:19.902 iops : min= 3289, max= 3510, avg=3399.50, stdev=156.27, samples=2 00:39:19.902 lat (msec) : 4=0.08%, 10=3.03%, 20=61.24%, 50=34.53%, 100=1.12% 00:39:19.902 cpu : usr=3.09%, sys=6.28%, ctx=284, majf=0, minf=1 00:39:19.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:19.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.902 issued rwts: total=3072,3530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:19.902 job3: (groupid=0, jobs=1): err= 0: pid=449223: Tue Nov 19 03:20:30 2024 00:39:19.902 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:39:19.902 slat (usec): min=2, max=20097, avg=133.54, stdev=865.82 00:39:19.902 clat (usec): min=9013, max=34531, avg=16347.19, stdev=5403.84 00:39:19.902 lat (usec): min=9021, max=38367, avg=16480.73, stdev=5468.64 00:39:19.902 clat percentiles (usec): 00:39:19.902 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[11731], 20.00th=[11994], 00:39:19.902 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14615], 60.00th=[16057], 00:39:19.902 | 70.00th=[17957], 80.00th=[20055], 90.00th=[25297], 95.00th=[27657], 00:39:19.902 | 99.00th=[31589], 99.50th=[33162], 99.90th=[33424], 99.95th=[33817], 00:39:19.902 | 99.99th=[34341] 00:39:19.902 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1008msec); 0 zone resets 
00:39:19.902 slat (usec): min=3, max=13449, avg=126.32, stdev=814.57 00:39:19.902 clat (usec): min=4499, max=88091, avg=17333.40, stdev=12308.47 00:39:19.902 lat (usec): min=6755, max=88096, avg=17459.72, stdev=12362.14 00:39:19.902 clat percentiles (usec): 00:39:19.902 | 1.00th=[ 8356], 5.00th=[10552], 10.00th=[11076], 20.00th=[11731], 00:39:19.902 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13304], 60.00th=[13829], 00:39:19.902 | 70.00th=[15401], 80.00th=[18482], 90.00th=[25035], 95.00th=[39584], 00:39:19.902 | 99.00th=[83362], 99.50th=[86508], 99.90th=[87557], 99.95th=[88605], 00:39:19.902 | 99.99th=[88605] 00:39:19.902 bw ( KiB/s): min=14826, max=15728, per=24.34%, avg=15277.00, stdev=637.81, samples=2 00:39:19.902 iops : min= 3706, max= 3932, avg=3819.00, stdev=159.81, samples=2 00:39:19.902 lat (msec) : 10=1.82%, 20=79.06%, 50=17.13%, 100=1.99% 00:39:19.902 cpu : usr=2.18%, sys=5.76%, ctx=232, majf=0, minf=2 00:39:19.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:19.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.902 issued rwts: total=3584,3951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:19.902 00:39:19.902 Run status group 0 (all jobs): 00:39:19.902 READ: bw=55.9MiB/s (58.6MB/s), 12.0MiB/s-16.3MiB/s (12.5MB/s-17.1MB/s), io=56.3MiB (59.1MB), run=1003-1008msec 00:39:19.902 WRITE: bw=61.3MiB/s (64.3MB/s), 13.7MiB/s-17.9MiB/s (14.4MB/s-18.8MB/s), io=61.8MiB (64.8MB), run=1003-1008msec 00:39:19.902 00:39:19.902 Disk stats (read/write): 00:39:19.902 nvme0n1: ios=2704/3072, merge=0/0, ticks=20714/32575, in_queue=53289, util=85.77% 00:39:19.902 nvme0n2: ios=3633/3628, merge=0/0, ticks=30955/37485, in_queue=68440, util=89.85% 00:39:19.902 nvme0n3: ios=2617/2828, merge=0/0, ticks=21003/16917, in_queue=37920, util=94.90% 00:39:19.902 nvme0n4: ios=3149/3584, merge=0/0, ticks=19843/18756, in_queue=38599, util=95.59% 00:39:19.902 03:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:19.902 [global] 00:39:19.902 thread=1 00:39:19.902 invalidate=1 00:39:19.902 rw=randwrite 00:39:19.902 time_based=1 00:39:19.902 runtime=1 00:39:19.902 ioengine=libaio 00:39:19.902 direct=1 00:39:19.902 bs=4096 00:39:19.902 iodepth=128 00:39:19.902 norandommap=0 00:39:19.902 numjobs=1 00:39:19.902 00:39:19.902 verify_dump=1 00:39:19.902 verify_backlog=512 00:39:19.902 verify_state_save=0 00:39:19.902 do_verify=1 00:39:19.902 verify=crc32c-intel 00:39:19.902 [job0] 00:39:19.902 filename=/dev/nvme0n1 00:39:19.902 [job1] 00:39:19.902 filename=/dev/nvme0n2 00:39:19.902 [job2] 00:39:19.902 filename=/dev/nvme0n3 00:39:19.902 [job3] 00:39:19.902 filename=/dev/nvme0n4 00:39:19.902 Could not set queue depth (nvme0n1) 00:39:19.902 Could not set queue depth (nvme0n2) 00:39:19.902 Could not set queue depth (nvme0n3) 00:39:19.902 Could not set queue depth (nvme0n4) 00:39:19.902 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.902 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.902 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.902 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.902 fio-3.35 00:39:19.902 Starting 4 threads 00:39:21.275 00:39:21.275 job0: (groupid=0, jobs=1): err= 0: pid=449449: Tue Nov 19 03:20:31 2024 00:39:21.275 read: IOPS=4823, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1003msec) 00:39:21.275 slat (usec): min=2, max=3935, avg=95.21, stdev=457.12 00:39:21.275 clat (usec): min=525, max=15359, avg=12284.61, stdev=1360.50 00:39:21.275 lat (usec): min=3923, max=16946, avg=12379.82, stdev=1317.66 00:39:21.275 clat percentiles (usec): 00:39:21.275 | 1.00th=[ 8225], 5.00th=[10159], 10.00th=[10945], 20.00th=[11600], 00:39:21.275 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:39:21.275 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:39:21.275 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15270], 99.95th=[15401], 00:39:21.275 | 99.99th=[15401] 00:39:21.275 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:39:21.275 slat (usec): min=4, max=31040, avg=98.48, stdev=630.23 00:39:21.275 clat (usec): min=6563, max=43937, avg=13015.07, stdev=4880.15 00:39:21.275 lat (usec): min=8715, max=43945, avg=13113.54, stdev=4879.11 00:39:21.275 clat percentiles (usec): 00:39:21.275 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10945], 20.00th=[11338], 00:39:21.275 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:39:21.275 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13960], 00:39:21.275 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:39:21.276 | 99.99th=[43779] 00:39:21.276 bw ( KiB/s): min=19976, max=20984, per=29.05%, avg=20480.00, stdev=712.76, samples=2 00:39:21.276 iops : min= 4994, max= 5246, avg=5120.00, stdev=178.19, samples=2 00:39:21.276 lat (usec) : 750=0.01% 00:39:21.276 lat (msec) : 4=0.09%, 10=3.92%, 20=94.71%, 50=1.28% 00:39:21.276 cpu : usr=4.09%, sys=6.59%, ctx=475, majf=0, minf=1 00:39:21.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:21.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.276 issued rwts: total=4838,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.276 job1: (groupid=0, jobs=1): err= 0: pid=449450: Tue Nov 19 03:20:31 2024 00:39:21.276 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:39:21.276 slat (usec): min=2, max=11725, avg=90.30, stdev=622.20 00:39:21.276 clat (usec): min=3279, max=63872, avg=11868.20, stdev=6663.85 00:39:21.276 lat (usec): min=3282, max=63876, avg=11958.49, stdev=6667.19 00:39:21.276 clat percentiles (usec): 00:39:21.276 | 1.00th=[ 4047], 5.00th=[ 5080], 10.00th=[ 6521], 20.00th=[ 9372], 00:39:21.276 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11076], 60.00th=[11731], 00:39:21.276 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15008], 95.00th=[16909], 00:39:21.276 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:39:21.276 | 99.99th=[63701] 00:39:21.276 write: IOPS=5297, BW=20.7MiB/s (21.7MB/s)(20.9MiB/1009msec); 0 zone resets 00:39:21.276 slat (usec): min=3, max=27092, avg=95.67, stdev=720.15 00:39:21.276 clat (usec): min=2531, max=48752, avg=12436.21, stdev=5586.93 00:39:21.276 lat (usec): min=2536, max=48770, avg=12531.88, stdev=5633.08 00:39:21.276 clat percentiles (usec): 00:39:21.276 | 1.00th=[ 5080], 5.00th=[ 
6652], 10.00th=[ 9241], 20.00th=[10552], 00:39:21.276 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:39:21.276 | 70.00th=[11994], 80.00th=[12387], 90.00th=[14222], 95.00th=[24773], 00:39:21.276 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:39:21.276 | 99.99th=[48497] 00:39:21.276 bw ( KiB/s): min=20480, max=21264, per=29.61%, avg=20872.00, stdev=554.37, samples=2 00:39:21.276 iops : min= 5120, max= 5316, avg=5218.00, stdev=138.59, samples=2 00:39:21.276 lat (msec) : 4=0.28%, 10=20.50%, 20=75.08%, 50=3.55%, 100=0.60% 00:39:21.276 cpu : usr=2.08%, sys=4.66%, ctx=422, majf=0, minf=1 00:39:21.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:21.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.276 issued rwts: total=5120,5345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.276 job2: (groupid=0, jobs=1): err= 0: pid=449451: Tue Nov 19 03:20:31 2024 00:39:21.276 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:39:21.276 slat (usec): min=2, max=27036, avg=150.79, stdev=1123.18 00:39:21.276 clat (msec): min=5, max=109, avg=21.03, stdev=13.94 00:39:21.276 lat (msec): min=5, max=111, avg=21.18, stdev=14.03 00:39:21.276 clat percentiles (msec): 00:39:21.276 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 15], 00:39:21.276 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:39:21.276 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 40], 95.00th=[ 52], 00:39:21.276 | 99.00th=[ 92], 99.50th=[ 97], 99.90th=[ 107], 99.95th=[ 107], 00:39:21.276 | 99.99th=[ 110] 00:39:21.276 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1001msec); 0 zone resets 00:39:21.276 slat (usec): min=3, max=17307, avg=149.29, stdev=877.52 00:39:21.276 clat (usec): min=472, max=115362, avg=17722.94, stdev=14674.88 00:39:21.276 lat (msec): min=3, max=115, avg=17.87, stdev=14.79 00:39:21.276 clat percentiles (msec): 00:39:21.276 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:39:21.276 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 16], 00:39:21.276 | 70.00th=[ 17], 80.00th=[ 17], 90.00th=[ 20], 95.00th=[ 23], 00:39:21.276 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 116], 99.95th=[ 116], 00:39:21.276 | 99.99th=[ 116] 00:39:21.276 bw ( KiB/s): min=16384, max=16384, per=23.24%, avg=16384.00, stdev= 0.00, samples=1 00:39:21.276 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:39:21.276 lat (usec) : 500=0.02% 00:39:21.276 lat (msec) : 4=0.49%, 10=2.10%, 20=82.06%, 50=11.05%, 100=3.16% 00:39:21.276 lat (msec) : 250=1.13% 00:39:21.276 cpu : usr=1.50%, sys=3.00%, ctx=322, majf=0, minf=1 00:39:21.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:21.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.276 issued rwts: total=3072,3489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.276 job3: (groupid=0, jobs=1): err= 0: pid=449452: Tue Nov 19 03:20:31 2024 00:39:21.276 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:39:21.276 slat (usec): min=3, max=14384, avg=131.21, stdev=1057.39 00:39:21.276 clat (usec): min=5948, max=43901, avg=16231.44, stdev=4531.24 00:39:21.276 lat 
(usec): min=5956, max=43908, avg=16362.65, stdev=4631.00 00:39:21.276 clat percentiles (usec): 00:39:21.276 | 1.00th=[10159], 5.00th=[11338], 10.00th=[12125], 20.00th=[13173], 00:39:21.276 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:39:21.276 | 70.00th=[17433], 80.00th=[19006], 90.00th=[23987], 95.00th=[25035], 00:39:21.276 | 99.00th=[29754], 99.50th=[36963], 99.90th=[43779], 99.95th=[43779], 00:39:21.276 | 99.99th=[43779] 00:39:21.276 write: IOPS=3863, BW=15.1MiB/s (15.8MB/s)(15.3MiB/1014msec); 0 zone resets 00:39:21.276 slat (usec): min=4, max=15218, avg=128.24, stdev=893.88 00:39:21.276 clat (usec): min=1502, max=77075, avg=17825.15, stdev=10909.91 00:39:21.276 lat (usec): min=1514, max=77084, avg=17953.39, stdev=10980.57 00:39:21.276 clat percentiles (usec): 00:39:21.276 | 1.00th=[ 4686], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[12387], 00:39:21.276 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14484], 60.00th=[15008], 00:39:21.276 | 70.00th=[17171], 80.00th=[20579], 90.00th=[26870], 95.00th=[37487], 00:39:21.276 | 99.00th=[71828], 99.50th=[72877], 99.90th=[77071], 99.95th=[77071], 00:39:21.276 | 99.99th=[77071] 00:39:21.276 bw ( KiB/s): min=13944, max=16384, per=21.51%, avg=15164.00, stdev=1725.34, samples=2 00:39:21.276 iops : min= 3486, max= 4096, avg=3791.00, stdev=431.34, samples=2 00:39:21.276 lat (msec) : 2=0.03%, 4=0.33%, 10=2.81%, 20=77.38%, 50=17.86% 00:39:21.276 lat (msec) : 100=1.59% 00:39:21.276 cpu : usr=2.57%, sys=5.23%, ctx=270, majf=0, minf=2 00:39:21.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:21.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.276 issued rwts: total=3584,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.276 00:39:21.276 Run status group 0 (all jobs): 00:39:21.276 READ: bw=64.0MiB/s (67.1MB/s), 12.0MiB/s-19.8MiB/s (12.6MB/s-20.8MB/s), io=64.9MiB (68.1MB), run=1001-1014msec 00:39:21.276 WRITE: bw=68.8MiB/s (72.2MB/s), 13.6MiB/s-20.7MiB/s (14.3MB/s-21.7MB/s), io=69.8MiB (73.2MB), run=1001-1014msec 00:39:21.276 00:39:21.276 Disk stats (read/write): 00:39:21.276 nvme0n1: ios=4132/4256, merge=0/0, ticks=13012/12811, in_queue=25823, util=97.29% 00:39:21.276 nvme0n2: ios=4173/4608, merge=0/0, ticks=27246/28336, in_queue=55582, util=96.55% 00:39:21.276 nvme0n3: ios=3124/3072, merge=0/0, ticks=27424/17214, in_queue=44638, util=98.65% 00:39:21.276 nvme0n4: ios=3130/3165, merge=0/0, ticks=50193/54928, in_queue=105121, util=98.01% 00:39:21.276 03:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:21.276 03:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=449589 00:39:21.276 03:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:21.276 03:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:21.276 [global] 00:39:21.276 thread=1 00:39:21.276 invalidate=1 00:39:21.276 rw=read 00:39:21.276 time_based=1 00:39:21.276 runtime=10 00:39:21.276 ioengine=libaio 00:39:21.276 direct=1 00:39:21.276 bs=4096 00:39:21.276 iodepth=1 00:39:21.276 norandommap=1 00:39:21.276 numjobs=1 00:39:21.276 00:39:21.276 [job0] 00:39:21.276 
filename=/dev/nvme0n1 00:39:21.276 [job1] 00:39:21.276 filename=/dev/nvme0n2 00:39:21.276 [job2] 00:39:21.276 filename=/dev/nvme0n3 00:39:21.276 [job3] 00:39:21.276 filename=/dev/nvme0n4 00:39:21.276 Could not set queue depth (nvme0n1) 00:39:21.276 Could not set queue depth (nvme0n2) 00:39:21.276 Could not set queue depth (nvme0n3) 00:39:21.276 Could not set queue depth (nvme0n4) 00:39:21.276 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.276 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.276 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.276 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.276 fio-3.35 00:39:21.276 Starting 4 threads 00:39:24.576 03:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:24.576 03:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:24.576 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:39:24.576 fio: pid=449681, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:24.833 03:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:24.833 03:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:24.833 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=26755072, buflen=4096 00:39:24.833 fio: pid=449680, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:25.091 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=389120, buflen=4096 00:39:25.091 fio: pid=449678, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:25.091 03:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:25.091 03:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:25.350 03:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:25.350 03:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:25.350 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57671680, buflen=4096 00:39:25.350 fio: pid=449679, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:25.350 00:39:25.350 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449678: Tue Nov 19 03:20:35 2024 00:39:25.350 read: IOPS=27, BW=108KiB/s (111kB/s)(380KiB/3507msec) 00:39:25.350 slat (usec): min=9, max=9875, avg=203.57, stdev=1283.79 
00:39:25.350 clat (usec): min=330, max=41373, avg=36464.31, stdev=12611.93 00:39:25.350 lat (usec): min=359, max=51039, avg=36669.86, stdev=12748.23 00:39:25.350 clat percentiles (usec): 00:39:25.350 | 1.00th=[ 330], 5.00th=[ 388], 10.00th=[ 4080], 20.00th=[41157], 00:39:25.350 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:25.350 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:25.350 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:25.350 | 99.99th=[41157] 00:39:25.350 bw ( KiB/s): min= 96, max= 176, per=0.51%, avg=110.67, stdev=32.17, samples=6 00:39:25.350 iops : min= 24, max= 44, avg=27.67, stdev= 8.04, samples=6 00:39:25.350 lat (usec) : 500=6.25%, 750=3.12% 00:39:25.350 lat (msec) : 10=1.04%, 20=1.04%, 50=87.50% 00:39:25.350 cpu : usr=0.00%, sys=0.11%, ctx=100, majf=0, minf=2 00:39:25.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:25.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.350 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.350 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:25.350 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449679: Tue Nov 19 03:20:35 2024 00:39:25.350 read: IOPS=3676, BW=14.4MiB/s (15.1MB/s)(55.0MiB/3830msec) 00:39:25.350 slat (usec): min=4, max=27248, avg=13.43, stdev=285.37 00:39:25.350 clat (usec): min=187, max=41121, avg=254.70, stdev=907.63 00:39:25.350 lat (usec): min=192, max=53019, avg=268.12, stdev=987.38 00:39:25.350 clat percentiles (usec): 00:39:25.350 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 215], 00:39:25.350 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:39:25.350 | 70.00th=[ 237], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 293], 00:39:25.350 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 424], 99.95th=[ 2180], 00:39:25.350 | 99.99th=[41157] 00:39:25.350 bw ( KiB/s): min=12632, max=17688, per=72.43%, avg=15720.00, stdev=1907.99, samples=7 00:39:25.350 iops : min= 3158, max= 4422, avg=3930.00, stdev=477.00, samples=7 00:39:25.350 lat (usec) : 250=77.47%, 500=22.44%, 750=0.02% 00:39:25.350 lat (msec) : 4=0.01%, 50=0.05% 00:39:25.350 cpu : usr=2.04%, sys=4.54%, ctx=14086, majf=0, minf=2 00:39:25.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:25.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.350 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.350 issued rwts: total=14081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:25.351 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449680: Tue Nov 19 03:20:35 2024 00:39:25.351 read: IOPS=2032, BW=8127KiB/s (8322kB/s)(25.5MiB/3215msec) 00:39:25.351 slat (nsec): min=5350, max=71437, avg=12434.29, stdev=7170.74 00:39:25.351 clat (usec): min=203, max=41037, avg=473.07, stdev=2814.49 00:39:25.351 lat (usec): min=212, max=41051, avg=485.51, stdev=2814.69 00:39:25.351 clat percentiles (usec): 00:39:25.351 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 251], 00:39:25.351 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:39:25.351 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 
95.00th=[ 343], 00:39:25.351 | 99.00th=[ 404], 99.50th=[ 1074], 99.90th=[41157], 99.95th=[41157], 00:39:25.351 | 99.99th=[41157] 00:39:25.351 bw ( KiB/s): min= 96, max=14888, per=35.76%, avg=7761.33, stdev=6383.44, samples=6 00:39:25.351 iops : min= 24, max= 3722, avg=1940.33, stdev=1595.86, samples=6 00:39:25.351 lat (usec) : 250=20.01%, 500=79.40%, 750=0.06%, 1000=0.02% 00:39:25.351 lat (msec) : 2=0.02%, 50=0.49% 00:39:25.351 cpu : usr=1.52%, sys=3.61%, ctx=6533, majf=0, minf=1 00:39:25.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:25.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.351 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.351 issued rwts: total=6533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:25.351 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449681: Tue Nov 19 03:20:35 2024 00:39:25.351 read: IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2934msec) 00:39:25.351 slat (nsec): min=13700, max=39643, avg=20385.74, stdev=7876.22 00:39:25.351 clat (usec): min=374, max=41016, avg=39852.42, stdev=6671.40 00:39:25.351 lat (usec): min=409, max=41038, avg=39872.83, stdev=6668.90 00:39:25.351 clat percentiles (usec): 00:39:25.351 | 1.00th=[ 375], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:25.351 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:25.351 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:25.351 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:25.351 | 99.99th=[41157] 00:39:25.351 bw ( KiB/s): min= 96, max= 112, per=0.46%, avg=100.80, stdev= 7.16, samples=5 00:39:25.351 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:39:25.351 lat (usec) : 500=2.70% 00:39:25.351 lat (msec) : 50=95.95% 00:39:25.351 cpu : usr=0.00%, sys=0.07%, ctx=75, majf=0, minf=1 00:39:25.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:25.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.351 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.351 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:25.351 00:39:25.351 Run status group 0 (all jobs): 00:39:25.351 READ: bw=21.2MiB/s (22.2MB/s), 99.5KiB/s-14.4MiB/s (102kB/s-15.1MB/s), io=81.2MiB (85.1MB), run=2934-3830msec 00:39:25.351 00:39:25.351 Disk stats (read/write): 00:39:25.351 nvme0n1: ios=133/0, merge=0/0, ticks=4383/0, in_queue=4383, util=99.23% 00:39:25.351 nvme0n2: ios=14074/0, merge=0/0, ticks=3244/0, in_queue=3244, util=95.15% 00:39:25.351 nvme0n3: ios=6206/0, merge=0/0, ticks=2943/0, in_queue=2943, util=96.82% 00:39:25.351 nvme0n4: ios=121/0, merge=0/0, ticks=2995/0, in_queue=2995, util=99.76% 00:39:25.610 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:25.610 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:25.868 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
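The xtrace output above shows target/fio.sh tearing down its block devices once the fio runs complete: the composite raid/concat bdevs are removed first, then every backing Malloc bdev is deleted in a loop. A minimal sketch of that teardown pattern, reconstructed only from the commands traced in this run (the bdev names and the rpc.py path are the ones this log reports; the explicit Malloc list stands in for the script's $malloc_bdevs/$raid_malloc_bdevs/$concat_malloc_bdevs variables):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# remove the composite bdevs first (names as reported in this run)
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
# then delete each backing Malloc bdev the test created
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $rpc bdev_malloc_delete "$malloc_bdev"
done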
00:39:25.868 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:26.126 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:26.126 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:26.384 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:26.384 03:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:26.642 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:26.642 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 449589 00:39:26.642 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:26.642 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:26.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:26.900 nvmf hotplug test: fio failed as expected 00:39:26.900 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:27.159 rmmod nvme_tcp 00:39:27.159 rmmod nvme_fabrics 00:39:27.159 rmmod nvme_keyring 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 447579 ']' 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 447579 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 447579 ']' 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 447579 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 447579 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 447579' 00:39:27.159 killing process with pid 447579 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 447579 00:39:27.159 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 447579 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:27.417 03:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:27.417 03:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.322 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:29.580 00:39:29.580 real 0m23.843s 00:39:29.580 user 1m7.428s 00:39:29.580 sys 0m10.281s 00:39:29.580 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:29.580 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:29.580 ************************************ 00:39:29.580 END TEST nvmf_fio_target 00:39:29.580 ************************************ 00:39:29.580 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:29.580 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:29.580 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:29.580 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:29.580 ************************************ 00:39:29.580 START TEST nvmf_bdevio 00:39:29.580 ************************************ 00:39:29.580 03:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:29.580 * Looking for test storage... 
00:39:29.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:29.580 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:29.580 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:39:29.580 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:29.580 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:29.580 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:29.580 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:29.580 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:29.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.581 --rc genhtml_branch_coverage=1 00:39:29.581 --rc genhtml_function_coverage=1 00:39:29.581 --rc genhtml_legend=1 00:39:29.581 --rc geninfo_all_blocks=1 00:39:29.581 --rc geninfo_unexecuted_blocks=1 00:39:29.581 00:39:29.581 ' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:29.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.581 --rc genhtml_branch_coverage=1 00:39:29.581 --rc genhtml_function_coverage=1 00:39:29.581 --rc genhtml_legend=1 00:39:29.581 --rc geninfo_all_blocks=1 00:39:29.581 --rc geninfo_unexecuted_blocks=1 00:39:29.581 00:39:29.581 ' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:29.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.581 --rc genhtml_branch_coverage=1 00:39:29.581 --rc genhtml_function_coverage=1 00:39:29.581 --rc genhtml_legend=1 00:39:29.581 --rc geninfo_all_blocks=1 00:39:29.581 --rc geninfo_unexecuted_blocks=1 00:39:29.581 00:39:29.581 ' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:29.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.581 --rc genhtml_branch_coverage=1 00:39:29.581 --rc genhtml_function_coverage=1 00:39:29.581 --rc genhtml_legend=1 00:39:29.581 --rc geninfo_all_blocks=1 00:39:29.581 --rc geninfo_unexecuted_blocks=1 00:39:29.581 00:39:29.581 ' 00:39:29.581 03:20:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:29.581 03:20:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:29.581 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:29.582 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:32.116 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:32.116 03:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:32.116 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:32.116 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:32.116 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:32.116 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:32.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:32.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:39:32.117 00:39:32.117 --- 10.0.0.2 ping statistics --- 00:39:32.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.117 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:32.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:32.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:39:32.117 00:39:32.117 --- 10.0.0.1 ping statistics --- 00:39:32.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.117 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:32.117 03:20:42 
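The network plumbing traced here boils down to a short, repeatable sequence: the target-side port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in the firewall, and reachability is verified in both directions before the target starts. A condensed sketch of the commands visible in the trace (interface names cvl_0_0/cvl_0_1 and the addresses are the ones this job discovered, not fixed values):

  # target-side port lives in its own namespace; initiator-side port stays in the default one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 (the rule is tagged SPDK_NVMF so it can be stripped again at teardown)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # sanity-check connectivity in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
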
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=452305 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 452305 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 452305 ']' 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.117 [2024-11-19 03:20:42.379061] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:32.117 [2024-11-19 03:20:42.380116] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:39:32.117 [2024-11-19 03:20:42.380166] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.117 [2024-11-19 03:20:42.453199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:32.117 [2024-11-19 03:20:42.503178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:32.117 [2024-11-19 03:20:42.503243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:32.117 [2024-11-19 03:20:42.503268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:32.117 [2024-11-19 03:20:42.503295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:32.117 [2024-11-19 03:20:42.503305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:32.117 [2024-11-19 03:20:42.504866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:32.117 [2024-11-19 03:20:42.504894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:32.117 [2024-11-19 03:20:42.504947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:32.117 [2024-11-19 03:20:42.504949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:32.117 [2024-11-19 03:20:42.591493] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:32.117 [2024-11-19 03:20:42.591735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:32.117 [2024-11-19 03:20:42.591955] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:32.117 [2024-11-19 03:20:42.592543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:32.117 [2024-11-19 03:20:42.592795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.117 [2024-11-19 03:20:42.649643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.117 Malloc0 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.117 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.118 03:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.118 [2024-11-19 03:20:42.726081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:32.118 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:32.118 { 00:39:32.118 "params": { 00:39:32.118 "name": "Nvme$subsystem", 00:39:32.118 "trtype": "$TEST_TRANSPORT", 00:39:32.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:32.118 "adrfam": "ipv4", 00:39:32.118 "trsvcid": "$NVMF_PORT", 00:39:32.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:32.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:32.118 "hdgst": ${hdgst:-false}, 00:39:32.118 "ddgst": ${ddgst:-false} 00:39:32.118 }, 00:39:32.118 "method": "bdev_nvme_attach_controller" 00:39:32.118 } 00:39:32.118 EOF 00:39:32.118 )") 00:39:32.376 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:32.376 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:32.376 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:32.376 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:32.376 "params": { 00:39:32.376 "name": "Nvme1", 00:39:32.376 "trtype": "tcp", 00:39:32.376 "traddr": "10.0.0.2", 00:39:32.376 "adrfam": "ipv4", 00:39:32.376 "trsvcid": "4420", 00:39:32.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:32.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:32.376 "hdgst": false, 00:39:32.376 "ddgst": false 00:39:32.376 }, 00:39:32.376 "method": "bdev_nvme_attach_controller" 00:39:32.376 }' 00:39:32.376 [2024-11-19 03:20:42.772096] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
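Stripped of the xtrace noise, the bdevio target bring-up is: launch nvmf_tgt in interrupt mode inside the target namespace, assemble the subsystem with a handful of RPCs, then point bdevio at it as the initiator. A restatement of the commands seen in the trace (rpc_cmd is the test harness' RPC client wrapper; paths and core masks are the ones this job uses, and backgrounding/waiting for the target is elided):

  # target: interrupt mode, core mask 0x78 (cores 3-6), running inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &

  # TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem exposing it on 10.0.0.2:4420
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio acts as the initiator, consuming the generated bdev_nvme_attach_controller JSON over fd 62
  ./test/bdev/bdevio/bdevio --json /dev/fd/62
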
00:39:32.376 [2024-11-19 03:20:42.772180] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452450 ] 00:39:32.376 [2024-11-19 03:20:42.840951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:32.376 [2024-11-19 03:20:42.890717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:32.376 [2024-11-19 03:20:42.890757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:32.376 [2024-11-19 03:20:42.890760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.634 I/O targets: 00:39:32.634 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:32.634 00:39:32.634 00:39:32.634 CUnit - A unit testing framework for C - Version 2.1-3 00:39:32.634 http://cunit.sourceforge.net/ 00:39:32.634 00:39:32.634 00:39:32.634 Suite: bdevio tests on: Nvme1n1 00:39:32.634 Test: blockdev write read block ...passed 00:39:32.634 Test: blockdev write zeroes read block ...passed 00:39:32.634 Test: blockdev write zeroes read no split ...passed 00:39:32.634 Test: blockdev write zeroes read split ...passed 00:39:32.891 Test: blockdev write zeroes read split partial ...passed 00:39:32.891 Test: blockdev reset ...[2024-11-19 03:20:43.285605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:32.891 [2024-11-19 03:20:43.285729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca2b70 (9): Bad file descriptor 00:39:32.891 [2024-11-19 03:20:43.290080] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:39:32.891 passed 00:39:32.891 Test: blockdev write read 8 blocks ...passed 00:39:32.891 Test: blockdev write read size > 128k ...passed 00:39:32.891 Test: blockdev write read invalid size ...passed 00:39:32.891 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:32.891 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:32.891 Test: blockdev write read max offset ...passed 00:39:32.891 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:32.891 Test: blockdev writev readv 8 blocks ...passed 00:39:32.891 Test: blockdev writev readv 30 x 1block ...passed 00:39:32.891 Test: blockdev writev readv block ...passed 00:39:32.891 Test: blockdev writev readv size > 128k ...passed 00:39:32.891 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:32.891 Test: blockdev comparev and writev ...[2024-11-19 03:20:43.500847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.891 [2024-11-19 03:20:43.500885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:32.891 [2024-11-19 03:20:43.500910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.891 [2024-11-19 03:20:43.500927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:32.891 [2024-11-19 03:20:43.501306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.891 [2024-11-19 03:20:43.501330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:32.891 [2024-11-19 03:20:43.501352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.891 [2024-11-19 03:20:43.501368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:32.891 [2024-11-19 03:20:43.501748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.891 [2024-11-19 03:20:43.501774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:32.891 [2024-11-19 03:20:43.501797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.891 [2024-11-19 03:20:43.501814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:32.892 [2024-11-19 03:20:43.502170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.892 [2024-11-19 03:20:43.502195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:32.892 [2024-11-19 03:20:43.502218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.892 [2024-11-19 03:20:43.502234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:33.149 passed 00:39:33.149 Test: blockdev nvme passthru rw ...passed 00:39:33.149 Test: blockdev nvme passthru vendor specific ...[2024-11-19 03:20:43.584932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.149 [2024-11-19 03:20:43.584961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:33.149 [2024-11-19 03:20:43.585106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.149 [2024-11-19 03:20:43.585128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:33.149 [2024-11-19 03:20:43.585274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.149 [2024-11-19 03:20:43.585296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:33.149 [2024-11-19 03:20:43.585440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.149 [2024-11-19 03:20:43.585473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:33.149 passed 00:39:33.149 Test: blockdev nvme admin passthru ...passed 00:39:33.149 Test: blockdev copy ...passed 00:39:33.149 00:39:33.149 Run Summary: Type Total Ran Passed Failed Inactive 00:39:33.149 suites 1 1 n/a 0 0 00:39:33.149 tests 23 23 23 0 0 00:39:33.149 asserts 152 152 152 0 n/a 00:39:33.149 00:39:33.149 Elapsed time = 1.000 seconds 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.408 rmmod nvme_tcp 00:39:33.408 rmmod nvme_fabrics 00:39:33.408 rmmod nvme_keyring 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 452305 ']' 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 452305 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 452305 ']' 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 452305 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452305 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452305' 00:39:33.408 killing process with pid 452305 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 452305 00:39:33.408 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 452305 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.666 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:33.667 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.667 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.667 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.570 03:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:35.570 00:39:35.570 real 0m6.190s 00:39:35.570 user 0m7.869s 
00:39:35.570 sys 0m2.390s 00:39:35.570 03:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:35.570 03:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:35.570 ************************************ 00:39:35.570 END TEST nvmf_bdevio 00:39:35.570 ************************************ 00:39:35.830 03:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:35.830 00:39:35.830 real 3m53.523s 00:39:35.830 user 8m47.905s 00:39:35.830 sys 1m24.787s 00:39:35.830 03:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:35.830 03:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:35.830 ************************************ 00:39:35.830 END TEST nvmf_target_core_interrupt_mode 00:39:35.830 ************************************ 00:39:35.830 03:20:46 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:35.830 03:20:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:35.830 03:20:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:35.830 03:20:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:35.830 ************************************ 00:39:35.830 START TEST nvmf_interrupt 00:39:35.830 ************************************ 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:35.830 * Looking for test storage... 
00:39:35.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:35.830 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.831 --rc genhtml_branch_coverage=1 00:39:35.831 --rc genhtml_function_coverage=1 00:39:35.831 --rc genhtml_legend=1 00:39:35.831 --rc geninfo_all_blocks=1 00:39:35.831 --rc geninfo_unexecuted_blocks=1 00:39:35.831 00:39:35.831 ' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.831 --rc genhtml_branch_coverage=1 00:39:35.831 --rc genhtml_function_coverage=1 00:39:35.831 --rc genhtml_legend=1 00:39:35.831 --rc geninfo_all_blocks=1 00:39:35.831 --rc geninfo_unexecuted_blocks=1 00:39:35.831 00:39:35.831 ' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.831 --rc genhtml_branch_coverage=1 00:39:35.831 --rc genhtml_function_coverage=1 00:39:35.831 --rc genhtml_legend=1 00:39:35.831 --rc geninfo_all_blocks=1 00:39:35.831 --rc geninfo_unexecuted_blocks=1 00:39:35.831 00:39:35.831 ' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.831 --rc genhtml_branch_coverage=1 00:39:35.831 --rc genhtml_function_coverage=1 00:39:35.831 --rc genhtml_legend=1 00:39:35.831 --rc geninfo_all_blocks=1 00:39:35.831 --rc geninfo_unexecuted_blocks=1 00:39:35.831 00:39:35.831 ' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:35.831 03:20:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:35.832 03:20:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.363 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:38.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.364 03:20:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:38.364 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:38.364 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:38.364 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:38.364 03:20:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:38.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:38.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:39:38.364 00:39:38.364 --- 10.0.0.2 ping statistics --- 00:39:38.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.364 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:38.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:38.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:39:38.364 00:39:38.364 --- 10.0.0.1 ping statistics --- 00:39:38.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.364 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=454537 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 454537 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 454537 ']' 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:38.364 03:20:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.364 [2024-11-19 03:20:48.844077] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:38.364 [2024-11-19 03:20:48.845150] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:39:38.364 [2024-11-19 03:20:48.845203] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:38.364 [2024-11-19 03:20:48.915313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:38.364 [2024-11-19 03:20:48.959476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:38.364 [2024-11-19 03:20:48.959544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:38.364 [2024-11-19 03:20:48.959567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:38.364 [2024-11-19 03:20:48.959577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:38.364 [2024-11-19 03:20:48.959587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:38.364 [2024-11-19 03:20:48.960888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.364 [2024-11-19 03:20:48.960894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.624 [2024-11-19 03:20:49.042913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:38.624 [2024-11-19 03:20:49.042953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:38.624 [2024-11-19 03:20:49.043178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:38.624 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:38.625 5000+0 records in 00:39:38.625 5000+0 records out 00:39:38.625 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0138988 s, 737 MB/s 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.625 AIO0 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.625 [2024-11-19 03:20:49.161508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.625 03:20:49 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.625 [2024-11-19 03:20:49.189800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 454537 0 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454537 0 idle 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:38.625 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454537 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.25 reactor_0' 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454537 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.25 reactor_0 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 454537 1 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454537 1 idle 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:38.932 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:39.235 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454541 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 reactor_1' 00:39:39.235 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454541 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 reactor_1 00:39:39.235 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:39.235 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:39.235 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:39.235 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=454704 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 454537 0 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 454537 0 busy 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454537 root 20 0 128.2g 47232 33792 R 81.2 0.1 0:00.38 reactor_0' 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454537 root 20 0 128.2g 47232 33792 R 81.2 0.1 0:00.38 reactor_0 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=81.2 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=81 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 454537 1 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 454537 1 busy 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:39.236 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454541 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:00.23 reactor_1' 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454541 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:00.23 reactor_1 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:39.517 03:20:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 454704 00:39:49.487 Initializing NVMe Controllers 00:39:49.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:49.487 Controller IO queue size 256, less than required. 00:39:49.487 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:49.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:49.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:49.487 Initialization complete. Launching workers. 
00:39:49.487 ======================================================== 00:39:49.487 Latency(us) 00:39:49.487 Device Information : IOPS MiB/s Average min max 00:39:49.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13693.70 53.49 18707.47 4443.05 22706.90 00:39:49.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13607.00 53.15 18826.82 3945.19 23813.93 00:39:49.487 ======================================================== 00:39:49.487 Total : 27300.69 106.64 18766.96 3945.19 23813.93 00:39:49.487 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 454537 0 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454537 0 idle 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454537 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:20.21 reactor_0' 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454537 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:20.21 reactor_0 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 454537 1 00:39:49.487 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454537 1 idle 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:49.488 03:20:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454541 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.97 reactor_1' 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454541 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.97 reactor_1 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:49.488 03:21:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:49.747 03:21:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:49.747 03:21:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:39:49.747 03:21:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:49.747 03:21:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:49.747 03:21:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 454537 0 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454537 0 idle 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:51.654 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454537 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:20.30 reactor_0' 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454537 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:20.30 reactor_0 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 454537 1 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454537 1 idle 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454537 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:51.912 03:21:02 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454537 -w 256 00:39:51.912 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:52.170 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454541 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:10.01 reactor_1' 00:39:52.170 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454541 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:10.01 reactor_1 00:39:52.170 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:52.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:52.171 rmmod nvme_tcp 00:39:52.171 rmmod nvme_fabrics 00:39:52.171 rmmod nvme_keyring 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 454537 ']' 00:39:52.171 
03:21:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 454537 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 454537 ']' 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 454537 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:52.171 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 454537 00:39:52.429 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:52.429 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:52.429 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 454537' 00:39:52.429 killing process with pid 454537 00:39:52.429 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 454537 00:39:52.429 03:21:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 454537 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:52.429 03:21:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.972 03:21:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:54.972 00:39:54.972 real 0m18.816s 00:39:54.972 user 0m37.132s 00:39:54.972 sys 0m6.414s 00:39:54.972 03:21:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.972 03:21:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:54.972 ************************************ 00:39:54.972 END TEST nvmf_interrupt 00:39:54.972 ************************************ 00:39:54.972 00:39:54.972 real 33m4.444s 00:39:54.972 user 87m49.196s 00:39:54.972 sys 8m1.130s 00:39:54.972 03:21:05 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.972 03:21:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.972 ************************************ 00:39:54.972 END TEST nvmf_tcp 00:39:54.972 ************************************ 00:39:54.972 03:21:05 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:39:54.972 03:21:05 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:54.972 03:21:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:39:54.972 03:21:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.972 03:21:05 -- common/autotest_common.sh@10 -- # set +x 00:39:54.972 ************************************ 00:39:54.972 START TEST spdkcli_nvmf_tcp 00:39:54.972 ************************************ 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:54.972 * Looking for test storage... 00:39:54.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.972 --rc genhtml_branch_coverage=1 00:39:54.972 --rc genhtml_function_coverage=1 00:39:54.972 --rc genhtml_legend=1 00:39:54.972 --rc geninfo_all_blocks=1 00:39:54.972 --rc geninfo_unexecuted_blocks=1 00:39:54.972 00:39:54.972 ' 00:39:54.972 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.972 --rc genhtml_branch_coverage=1 00:39:54.972 --rc genhtml_function_coverage=1 00:39:54.972 --rc genhtml_legend=1 00:39:54.973 --rc geninfo_all_blocks=1 00:39:54.973 --rc geninfo_unexecuted_blocks=1 00:39:54.973 00:39:54.973 ' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:54.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.973 --rc genhtml_branch_coverage=1 00:39:54.973 --rc genhtml_function_coverage=1 00:39:54.973 --rc genhtml_legend=1 00:39:54.973 --rc geninfo_all_blocks=1 00:39:54.973 --rc geninfo_unexecuted_blocks=1 00:39:54.973 00:39:54.973 ' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:54.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.973 --rc genhtml_branch_coverage=1 00:39:54.973 --rc genhtml_function_coverage=1 00:39:54.973 --rc genhtml_legend=1 00:39:54.973 --rc geninfo_all_blocks=1 00:39:54.973 --rc geninfo_unexecuted_blocks=1 00:39:54.973 00:39:54.973 ' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:54.973 
03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:54.973 03:21:05 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:54.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=456696 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 456696 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 456696 ']' 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.973 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.973 [2024-11-19 03:21:05.356928] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:39:54.973 [2024-11-19 03:21:05.357052] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456696 ] 00:39:54.973 [2024-11-19 03:21:05.431417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:54.973 [2024-11-19 03:21:05.484432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.973 [2024-11-19 03:21:05.484437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:55.232 03:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:55.232 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:55.232 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:55.232 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:55.232 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:55.232 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:55.232 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:55.232 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:55.232 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:55.232 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:55.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:55.232 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:55.232 ' 00:39:57.762 [2024-11-19 03:21:08.311263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:59.135 [2024-11-19 03:21:09.575581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:01.665 [2024-11-19 03:21:11.914859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:03.566 [2024-11-19 03:21:13.916989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:04.941 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:04.941 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:04.941 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:04.941 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:04.941 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:04.941 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:04.941 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:04.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:04.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:04.941 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:04.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:04.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:04.941 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:05.200 03:21:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:05.458 03:21:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:05.716 03:21:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:05.717 03:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:05.717 03:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:05.717 03:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.717 
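A minimal sketch, not part of the captured log: spdkcli_job.py simply feeds each quoted command above to spdkcli, so the same nvmf tree can be rebuilt by hand with one-shot scripts/spdkcli.py calls against an already-running nvmf_tgt (default RPC socket assumed). The commands are copied from the job above; only a representative subset is shown.
# Sketch only: one-shot spdkcli calls reproducing a few of the create steps above.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc3"
./scripts/spdkcli.py "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
./scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
./scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
./scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
./scripts/spdkcli.py ll /nvmf   # same listing the check_match step above diffs against spdkcli_nvmf.test.match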
03:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:05.717 03:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:05.717 03:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.717 03:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:05.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:05.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:05.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:05.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:05.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:05.717 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:05.717 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:05.717 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:05.717 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:05.717 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:05.717 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:05.717 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:05.717 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:05.717 ' 00:40:10.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:10.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:10.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:10.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:10.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:10.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:10.980 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:10.981 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:10.981 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:10.981 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:10.981 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:10.981 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:10.981 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:10.981 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:10.981 
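Not part of the log: the clear pass above tears the tree down in the reverse order of creation, namespaces and hosts first, then listen addresses, then the subsystems, and the malloc bdevs last. A hedged one-shot equivalent, using only command forms that appear in the job above:
# Sketch only: teardown in the same order the clear-config job uses.
./scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all"
./scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all"
./scripts/spdkcli.py "/nvmf/subsystem delete_all"
./scripts/spdkcli.py "/bdevs/malloc delete Malloc3"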
03:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 456696 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 456696 ']' 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 456696 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:10.981 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456696 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 456696' 00:40:11.238 killing process with pid 456696 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 456696 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 456696 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 456696 ']' 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 456696 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 456696 ']' 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 456696 00:40:11.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (456696) - No such process 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 456696 is not found' 00:40:11.238 Process with pid 456696 is not found 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:11.238 03:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:11.238 00:40:11.238 real 0m16.655s 00:40:11.238 user 0m35.568s 00:40:11.239 sys 0m0.846s 00:40:11.239 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.239 03:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.239 ************************************ 00:40:11.239 END TEST spdkcli_nvmf_tcp 00:40:11.239 ************************************ 00:40:11.239 03:21:21 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:11.239 03:21:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:11.239 03:21:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:11.239 03:21:21 -- common/autotest_common.sh@10 -- # set +x 00:40:11.239 ************************************ 00:40:11.239 START TEST nvmf_identify_passthru 00:40:11.239 ************************************ 00:40:11.239 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:11.498 * Looking for test storage... 
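Not part of the log: the killprocess helper traced above follows a fixed pattern: confirm the pid is alive with kill -0, check the process name so a sudo wrapper is never signalled, send the kill, then wait. The second killprocess issued from cleanup finds the pid already gone, which is why the "No such process" / "is not found" lines are expected rather than failures. A stripped-down sketch of that pattern; 456696 is simply the pid from this run.
# Sketch of the shutdown pattern used above.
pid=456696
if kill -0 "$pid" 2>/dev/null; then
    [ "$(ps --no-headers -o comm= "$pid")" != "sudo" ] && kill "$pid"
    wait "$pid" 2>/dev/null || true   # wait only succeeds when the pid is a child of this shell
fi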
00:40:11.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:11.498 03:21:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:11.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.498 --rc genhtml_branch_coverage=1 00:40:11.498 --rc genhtml_function_coverage=1 00:40:11.498 --rc genhtml_legend=1 00:40:11.498 --rc geninfo_all_blocks=1 00:40:11.498 --rc geninfo_unexecuted_blocks=1 00:40:11.498 00:40:11.498 ' 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:11.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.498 --rc genhtml_branch_coverage=1 00:40:11.498 --rc genhtml_function_coverage=1 00:40:11.498 --rc genhtml_legend=1 00:40:11.498 --rc geninfo_all_blocks=1 00:40:11.498 --rc geninfo_unexecuted_blocks=1 00:40:11.498 00:40:11.498 ' 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:11.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.498 --rc genhtml_branch_coverage=1 00:40:11.498 --rc genhtml_function_coverage=1 00:40:11.498 --rc genhtml_legend=1 00:40:11.498 --rc geninfo_all_blocks=1 00:40:11.498 --rc geninfo_unexecuted_blocks=1 00:40:11.498 00:40:11.498 ' 00:40:11.498 03:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:11.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.498 --rc genhtml_branch_coverage=1 00:40:11.498 --rc genhtml_function_coverage=1 00:40:11.498 --rc genhtml_legend=1 00:40:11.498 --rc geninfo_all_blocks=1 00:40:11.498 --rc geninfo_unexecuted_blocks=1 00:40:11.498 00:40:11.498 ' 00:40:11.498 03:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.498 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:11.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:11.499 03:21:21 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:11.499 03:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.499 03:21:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:11.499 03:21:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.499 03:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.499 03:21:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:11.499 03:21:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:11.499 03:21:22 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:11.499 03:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:14.035 03:21:24 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:14.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:14.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:14.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:14.035 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:14.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:14.036 03:21:24 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:14.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:14.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:40:14.036 00:40:14.036 --- 10.0.0.2 ping statistics --- 00:40:14.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.036 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:14.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:14.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:40:14.036 00:40:14.036 --- 10.0.0.1 ping statistics --- 00:40:14.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.036 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:14.036 03:21:24 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:14.036 03:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:14.036 03:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:14.036 03:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:14.036 03:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:14.036 03:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:14.036 03:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:14.036 03:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:14.036 03:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:18.223 03:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:18.223 03:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:18.223 03:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:18.223 03:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=461824 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 461824 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 461824 ']' 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.409 [2024-11-19 03:21:32.728414] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:40:22.409 [2024-11-19 03:21:32.728500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:22.409 [2024-11-19 03:21:32.802720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:22.409 [2024-11-19 03:21:32.851991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.409 [2024-11-19 03:21:32.852068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
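Not part of the log: the target for this test is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so nothing is configured until the harness starts issuing RPCs, and waitforlisten simply polls the RPC socket until the app answers. A rough reconstruction of that launch step, with path, core mask, and namespace taken from the command line above; the polling loop is illustrative, not the harness's own code.
# Sketch only: start nvmf_tgt in the target namespace and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# Per the app_setup_trace notice above, 'spdk_trace -s nvmf -i 0' can snapshot events at runtime.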
00:40:22.409 [2024-11-19 03:21:32.852081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.409 [2024-11-19 03:21:32.852093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.409 [2024-11-19 03:21:32.852102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.409 [2024-11-19 03:21:32.853606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:22.409 [2024-11-19 03:21:32.853671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:22.409 [2024-11-19 03:21:32.853741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:22.409 [2024-11-19 03:21:32.853745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:22.409 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.409 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.409 INFO: Log level set to 20 00:40:22.409 INFO: Requests: 00:40:22.409 { 00:40:22.409 "jsonrpc": "2.0", 00:40:22.409 "method": "nvmf_set_config", 00:40:22.409 "id": 1, 00:40:22.409 "params": { 00:40:22.409 "admin_cmd_passthru": { 00:40:22.410 "identify_ctrlr": true 00:40:22.410 } 00:40:22.410 } 00:40:22.410 } 00:40:22.410 00:40:22.410 INFO: response: 00:40:22.410 { 00:40:22.410 "jsonrpc": "2.0", 00:40:22.410 "id": 1, 00:40:22.410 "result": true 00:40:22.410 } 00:40:22.410 00:40:22.410 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.410 03:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:22.410 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.410 03:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.410 INFO: Setting log level to 20 00:40:22.410 INFO: Setting log level to 20 00:40:22.410 INFO: Log level set to 20 00:40:22.410 INFO: Log level set to 20 00:40:22.410 INFO: Requests: 00:40:22.410 { 00:40:22.410 "jsonrpc": "2.0", 00:40:22.410 "method": "framework_start_init", 00:40:22.410 "id": 1 00:40:22.410 } 00:40:22.410 00:40:22.410 INFO: Requests: 00:40:22.410 { 00:40:22.410 "jsonrpc": "2.0", 00:40:22.410 "method": "framework_start_init", 00:40:22.410 "id": 1 00:40:22.410 } 00:40:22.410 00:40:22.668 [2024-11-19 03:21:33.067210] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:22.668 INFO: response: 00:40:22.668 { 00:40:22.668 "jsonrpc": "2.0", 00:40:22.668 "id": 1, 00:40:22.668 "result": true 00:40:22.668 } 00:40:22.668 00:40:22.668 INFO: response: 00:40:22.668 { 00:40:22.668 "jsonrpc": "2.0", 00:40:22.668 "id": 1, 00:40:22.668 "result": true 00:40:22.668 } 00:40:22.668 00:40:22.668 03:21:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.668 03:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:22.668 03:21:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.668 03:21:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:22.668 INFO: Setting log level to 40 00:40:22.668 INFO: Setting log level to 40 00:40:22.668 INFO: Setting log level to 40 00:40:22.668 [2024-11-19 03:21:33.077277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.668 03:21:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.668 03:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:22.668 03:21:33 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:22.668 03:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.668 03:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:22.668 03:21:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.668 03:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.947 Nvme0n1 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.947 03:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.947 03:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.947 03:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.947 [2024-11-19 03:21:35.969493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.947 03:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.947 [ 00:40:25.947 { 00:40:25.947 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:25.947 "subtype": "Discovery", 00:40:25.947 "listen_addresses": [], 00:40:25.947 "allow_any_host": true, 00:40:25.947 "hosts": [] 00:40:25.947 }, 00:40:25.947 { 00:40:25.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:25.947 "subtype": "NVMe", 00:40:25.947 "listen_addresses": [ 00:40:25.947 { 00:40:25.947 "trtype": "TCP", 00:40:25.947 "adrfam": "IPv4", 00:40:25.947 "traddr": "10.0.0.2", 00:40:25.947 "trsvcid": "4420" 00:40:25.947 } 00:40:25.947 ], 00:40:25.947 "allow_any_host": true, 00:40:25.947 "hosts": [], 00:40:25.947 "serial_number": 
"SPDK00000000000001", 00:40:25.947 "model_number": "SPDK bdev Controller", 00:40:25.947 "max_namespaces": 1, 00:40:25.947 "min_cntlid": 1, 00:40:25.947 "max_cntlid": 65519, 00:40:25.947 "namespaces": [ 00:40:25.947 { 00:40:25.947 "nsid": 1, 00:40:25.947 "bdev_name": "Nvme0n1", 00:40:25.947 "name": "Nvme0n1", 00:40:25.947 "nguid": "8057411922DE440B9E73A25114B30916", 00:40:25.947 "uuid": "80574119-22de-440b-9e73-a25114b30916" 00:40:25.947 } 00:40:25.947 ] 00:40:25.947 } 00:40:25.947 ] 00:40:25.947 03:21:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.947 03:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:25.947 03:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:25.947 03:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:25.947 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.947 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.947 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:25.947 03:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:25.947 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:25.947 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:25.947 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:25.947 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:25.948 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:25.948 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:25.948 rmmod nvme_tcp 00:40:25.948 rmmod nvme_fabrics 00:40:25.948 rmmod nvme_keyring 00:40:25.948 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:25.948 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:25.948 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:25.948 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 461824 ']' 00:40:25.948 03:21:36 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 461824 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 461824 ']' 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 461824 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 461824 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 461824' 00:40:25.948 killing process with pid 461824 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 461824 00:40:25.948 03:21:36 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 461824 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:27.850 03:21:37 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.850 03:21:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:27.850 03:21:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.755 03:21:40 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.755 00:40:29.755 real 0m18.193s 00:40:29.755 user 0m27.203s 00:40:29.755 sys 0m2.387s 00:40:29.755 03:21:40 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.755 03:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.755 ************************************ 00:40:29.755 END TEST nvmf_identify_passthru 00:40:29.755 ************************************ 00:40:29.755 03:21:40 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:29.755 03:21:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:29.755 03:21:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.755 03:21:40 -- common/autotest_common.sh@10 -- # set +x 00:40:29.755 ************************************ 00:40:29.755 START TEST nvmf_dif 00:40:29.755 ************************************ 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:29.755 * Looking for test storage... 
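Not part of the log: nvmftestfini at the end of the identify_passthru run above unwinds the network setup: the nvme-tcp and nvme-fabrics modules are unloaded, the SPDK_NVMF iptables rule is dropped by re-filtering iptables-save output, the test address is flushed from cvl_0_1, and the target namespace is removed. A rough sketch of those steps; the explicit ip netns delete is an assumption, since the harness hides it behind _remove_spdk_ns.
# Sketch of the teardown logged above; device and namespace names are from this run.
modprobe -r nvme-tcp            # per the rmmod lines above, this also drops nvme_fabrics and nvme_keyring
modprobe -r nvme-fabrics        # no-op if already removed as a dependency
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the ACCEPT rule added during setup
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk 2>/dev/null             # assumption: the effect of _remove_spdk_ns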
00:40:29.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.755 --rc genhtml_branch_coverage=1 00:40:29.755 --rc genhtml_function_coverage=1 00:40:29.755 --rc genhtml_legend=1 00:40:29.755 --rc geninfo_all_blocks=1 00:40:29.755 --rc geninfo_unexecuted_blocks=1 00:40:29.755 00:40:29.755 ' 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.755 --rc genhtml_branch_coverage=1 00:40:29.755 --rc genhtml_function_coverage=1 00:40:29.755 --rc genhtml_legend=1 00:40:29.755 --rc geninfo_all_blocks=1 00:40:29.755 --rc geninfo_unexecuted_blocks=1 00:40:29.755 00:40:29.755 ' 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:40:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.755 --rc genhtml_branch_coverage=1 00:40:29.755 --rc genhtml_function_coverage=1 00:40:29.755 --rc genhtml_legend=1 00:40:29.755 --rc geninfo_all_blocks=1 00:40:29.755 --rc geninfo_unexecuted_blocks=1 00:40:29.755 00:40:29.755 ' 00:40:29.755 03:21:40 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.755 --rc genhtml_branch_coverage=1 00:40:29.755 --rc genhtml_function_coverage=1 00:40:29.755 --rc genhtml_legend=1 00:40:29.755 --rc geninfo_all_blocks=1 00:40:29.755 --rc geninfo_unexecuted_blocks=1 00:40:29.755 00:40:29.755 ' 00:40:29.755 03:21:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:29.755 03:21:40 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:29.755 03:21:40 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:29.755 03:21:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.755 03:21:40 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.756 03:21:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.756 03:21:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:29.756 03:21:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:29.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:29.756 03:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:29.756 03:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:29.756 03:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:29.756 03:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:29.756 03:21:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.756 03:21:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:29.756 03:21:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:29.756 03:21:40 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:29.756 03:21:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:31.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.657 
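The trace above is common.sh sorting the host's NICs into e810/x722/mlx buckets by PCI vendor:device ID (0x8086:0x159b falls into the e810 list) and then resolving each PCI function to its kernel interface through sysfs. A rough standalone equivalent, assuming lspci is installed; the helper name is made up for illustration and is not part of the test suite:

list_e810_netdevs() {
    local pci
    # -D full domain:bus:dev.fn, -n numeric IDs, -m machine-readable, -d vendor:device filter
    for pci in $(lspci -Dnm -d 8086:159b | awk '{print $1}'); do
        # each PCI network function exposes its bound netdev name(s) under sysfs
        ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null
    done
}
list_e810_netdevs    # on this rig it should print the cvl_0_0/cvl_0_1 names reported below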
03:21:42 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:31.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:31.657 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:31.657 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:31.657 03:21:42 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:31.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:31.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:40:31.916 00:40:31.916 --- 10.0.0.2 ping statistics --- 00:40:31.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.916 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:31.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:31.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:40:31.916 00:40:31.916 --- 10.0.0.1 ping statistics --- 00:40:31.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.916 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:31.916 03:21:42 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:32.850 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:32.850 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:32.850 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:32.850 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:32.850 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:32.850 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:32.850 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:32.850 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:32.850 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:32.850 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:32.850 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:32.850 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:32.850 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:32.850 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:32.850 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:32.850 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:32.850 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:33.109 03:21:43 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:33.109 03:21:43 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:33.109 03:21:43 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:33.109 03:21:43 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:33.109 03:21:43 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:33.110 03:21:43 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:33.110 03:21:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:33.110 03:21:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:33.110 03:21:43 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.110 03:21:43 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=465098 00:40:33.110 03:21:43 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:33.110 03:21:43 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 465098 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 465098 ']' 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:33.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:33.110 03:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.369 [2024-11-19 03:21:43.729797] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:40:33.369 [2024-11-19 03:21:43.729907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:33.369 [2024-11-19 03:21:43.802121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.369 [2024-11-19 03:21:43.848888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:33.369 [2024-11-19 03:21:43.848960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:33.369 [2024-11-19 03:21:43.848974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:33.369 [2024-11-19 03:21:43.848985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:33.369 [2024-11-19 03:21:43.848995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:33.369 [2024-11-19 03:21:43.849567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.369 03:21:43 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:33.369 03:21:43 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:40:33.369 03:21:43 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:33.369 03:21:43 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:33.369 03:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.628 03:21:43 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:33.628 03:21:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:33.628 03:21:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:33.628 03:21:43 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.628 03:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.628 [2024-11-19 03:21:43.993401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.628 03:21:43 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.628 03:21:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:33.628 03:21:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:33.628 03:21:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:33.628 03:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.628 ************************************ 00:40:33.628 START TEST fio_dif_1_default 00:40:33.628 ************************************ 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.628 bdev_null0 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.628 [2024-11-19 03:21:44.049709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.628 { 00:40:33.628 "params": { 00:40:33.628 "name": "Nvme$subsystem", 00:40:33.628 "trtype": "$TEST_TRANSPORT", 00:40:33.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.628 "adrfam": "ipv4", 00:40:33.628 "trsvcid": "$NVMF_PORT", 00:40:33.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.628 "hdgst": ${hdgst:-false}, 00:40:33.628 "ddgst": ${ddgst:-false} 00:40:33.628 }, 00:40:33.628 "method": "bdev_nvme_attach_controller" 00:40:33.628 } 00:40:33.628 EOF 00:40:33.628 )") 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:33.628 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
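The heredoc and jq calls around this point assemble the JSON that fio's spdk_bdev plugin reads via --spdk_json_conf, so the job attaches an NVMe-oF controller over TCP instead of opening a local device. A hand-rolled reproduction might look roughly like the following sketch; the file path, job options and the Nvme0n1 filename are illustrative, not taken from the log:

cat > /tmp/nvme0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0" } } ] } ] }
EOF
# the bdev fio plugin is LD_PRELOADed, as in the trace above; thread=1 is required by it
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 --runtime=10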
00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:33.629 "params": { 00:40:33.629 "name": "Nvme0", 00:40:33.629 "trtype": "tcp", 00:40:33.629 "traddr": "10.0.0.2", 00:40:33.629 "adrfam": "ipv4", 00:40:33.629 "trsvcid": "4420", 00:40:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:33.629 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:33.629 "hdgst": false, 00:40:33.629 "ddgst": false 00:40:33.629 }, 00:40:33.629 "method": "bdev_nvme_attach_controller" 00:40:33.629 }' 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:33.629 03:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.887 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:33.887 fio-3.35 00:40:33.887 Starting 1 thread 00:40:46.081 00:40:46.081 filename0: (groupid=0, jobs=1): err= 0: pid=465328: Tue Nov 19 03:21:54 2024 00:40:46.081 read: IOPS=196, BW=785KiB/s (803kB/s)(7856KiB/10013msec) 00:40:46.081 slat (nsec): min=4098, max=67393, avg=9309.31, stdev=2817.75 00:40:46.081 clat (usec): min=515, max=42389, avg=20363.82, stdev=20295.03 00:40:46.081 lat (usec): min=523, max=42402, avg=20373.13, stdev=20295.14 00:40:46.081 clat percentiles (usec): 00:40:46.081 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 635], 00:40:46.081 | 30.00th=[ 676], 40.00th=[ 734], 50.00th=[ 766], 60.00th=[41157], 00:40:46.081 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:46.081 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:46.081 | 99.99th=[42206] 00:40:46.081 bw ( KiB/s): min= 704, max= 960, per=99.80%, avg=784.00, stdev=54.44, samples=20 00:40:46.081 iops : min= 176, max= 240, avg=196.00, stdev=13.61, samples=20 00:40:46.081 lat (usec) : 750=47.05%, 1000=4.28% 00:40:46.081 lat (msec) : 10=0.20%, 50=48.47% 00:40:46.081 cpu : usr=91.53%, sys=8.18%, ctx=13, majf=0, minf=288 00:40:46.081 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:46.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:46.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:46.081 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:46.081 latency : target=0, window=0, percentile=100.00%, depth=4 
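The per-thread numbers above hang together; a quick arithmetic cross-check (plain shell, nothing SPDK-specific, using only figures printed in the job stats):

echo $(( 1964 * 4096 / 1024 ))     # 7856 KiB transferred, matching io=7856KiB
echo $(( 7856 * 1000 / 10013 ))    # ~784 KiB/s over the 10013 ms run, matching bw=785KiB/s
echo $(( 1964 * 1000 / 10013 ))    # ~196 requests per second, matching IOPS=196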
00:40:46.081 00:40:46.082 Run status group 0 (all jobs): 00:40:46.082 READ: bw=785KiB/s (803kB/s), 785KiB/s-785KiB/s (803kB/s-803kB/s), io=7856KiB (8045kB), run=10013-10013msec 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 00:40:46.082 real 0m11.061s 00:40:46.082 user 0m10.397s 00:40:46.082 sys 0m1.108s 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 ************************************ 00:40:46.082 END TEST fio_dif_1_default 00:40:46.082 ************************************ 00:40:46.082 03:21:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:46.082 03:21:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:46.082 03:21:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 ************************************ 00:40:46.082 START TEST fio_dif_1_multi_subsystems 00:40:46.082 ************************************ 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 bdev_null0 00:40:46.082 03:21:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 [2024-11-19 03:21:55.154791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 bdev_null1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:46.082 { 00:40:46.082 "params": { 00:40:46.082 "name": "Nvme$subsystem", 00:40:46.082 "trtype": "$TEST_TRANSPORT", 00:40:46.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:46.082 "adrfam": "ipv4", 00:40:46.082 "trsvcid": "$NVMF_PORT", 00:40:46.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:46.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:46.082 "hdgst": ${hdgst:-false}, 00:40:46.082 "ddgst": ${ddgst:-false} 00:40:46.082 }, 00:40:46.082 "method": "bdev_nvme_attach_controller" 00:40:46.082 } 00:40:46.082 EOF 00:40:46.082 )") 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.082 
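The rpc_cmd calls traced above for subsystems 0 and 1 map one-to-one onto plain rpc.py invocations; a by-hand sketch, assuming the target is reachable on the default /var/tmp/spdk.sock RPC socket the test waits on (the loop variable is illustrative):

for i in 0 1; do
    scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done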
03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:46.082 { 00:40:46.082 "params": { 00:40:46.082 "name": "Nvme$subsystem", 00:40:46.082 "trtype": "$TEST_TRANSPORT", 00:40:46.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:46.082 "adrfam": "ipv4", 00:40:46.082 "trsvcid": "$NVMF_PORT", 00:40:46.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:46.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:46.082 "hdgst": ${hdgst:-false}, 00:40:46.082 "ddgst": ${ddgst:-false} 00:40:46.082 }, 00:40:46.082 "method": "bdev_nvme_attach_controller" 00:40:46.082 } 00:40:46.082 EOF 00:40:46.082 )") 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:46.082 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:46.082 "params": { 00:40:46.082 "name": "Nvme0", 00:40:46.082 "trtype": "tcp", 00:40:46.082 "traddr": "10.0.0.2", 00:40:46.082 "adrfam": "ipv4", 00:40:46.082 "trsvcid": "4420", 00:40:46.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:46.082 "hdgst": false, 00:40:46.083 "ddgst": false 00:40:46.083 }, 00:40:46.083 "method": "bdev_nvme_attach_controller" 00:40:46.083 },{ 00:40:46.083 "params": { 00:40:46.083 "name": "Nvme1", 00:40:46.083 "trtype": "tcp", 00:40:46.083 "traddr": "10.0.0.2", 00:40:46.083 "adrfam": "ipv4", 00:40:46.083 "trsvcid": "4420", 00:40:46.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:46.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:46.083 "hdgst": false, 00:40:46.083 "ddgst": false 00:40:46.083 }, 00:40:46.083 "method": "bdev_nvme_attach_controller" 00:40:46.083 }' 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:46.083 03:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:46.083 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:46.083 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:46.083 fio-3.35 00:40:46.083 Starting 2 threads 00:40:56.049 00:40:56.049 filename0: (groupid=0, jobs=1): err= 0: pid=466725: Tue Nov 19 03:22:06 2024 00:40:56.049 read: IOPS=106, BW=425KiB/s (435kB/s)(4256KiB/10017msec) 00:40:56.049 slat (nsec): min=7047, max=42733, avg=10693.03, stdev=5118.56 00:40:56.049 clat (usec): min=568, max=46563, avg=37621.71, stdev=11390.40 00:40:56.049 lat (usec): min=575, max=46603, avg=37632.40, stdev=11390.21 00:40:56.049 clat percentiles (usec): 00:40:56.049 | 1.00th=[ 603], 5.00th=[ 652], 10.00th=[40633], 20.00th=[41157], 00:40:56.049 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:56.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:40:56.049 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:40:56.049 | 99.99th=[46400] 00:40:56.049 bw ( KiB/s): min= 384, max= 480, per=33.77%, avg=424.42, stdev=33.46, samples=19 00:40:56.049 iops : min= 96, max= 120, avg=106.11, stdev= 8.37, samples=19 00:40:56.049 lat (usec) : 750=8.27%, 1000=0.38% 00:40:56.049 lat (msec) : 50=91.35% 00:40:56.049 cpu : usr=97.40%, sys=2.31%, ctx=17, majf=0, minf=44 00:40:56.049 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:56.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.049 issued rwts: total=1064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:56.049 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:56.049 filename1: (groupid=0, jobs=1): err= 0: pid=466726: Tue Nov 19 03:22:06 2024 00:40:56.049 read: IOPS=207, BW=832KiB/s (852kB/s)(8320KiB/10002msec) 00:40:56.049 slat (nsec): min=4133, max=45560, avg=10824.63, stdev=4496.14 00:40:56.049 clat (usec): min=518, max=48971, avg=19200.55, stdev=20256.61 00:40:56.049 lat (usec): min=526, max=48986, avg=19211.37, stdev=20256.15 00:40:56.049 clat percentiles (usec): 00:40:56.049 | 1.00th=[ 562], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 635], 00:40:56.049 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 775], 60.00th=[41157], 00:40:56.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:56.049 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:40:56.049 | 99.99th=[49021] 00:40:56.049 bw ( KiB/s): min= 704, max= 1088, per=65.79%, avg=826.95, stdev=85.51, samples=19 00:40:56.049 iops : min= 176, max= 272, avg=206.74, stdev=21.38, samples=19 00:40:56.049 lat (usec) : 750=45.67%, 1000=8.17% 00:40:56.049 lat (msec) : 2=0.58%, 50=45.58% 00:40:56.049 cpu : usr=97.55%, sys=2.14%, ctx=19, majf=0, minf=194 00:40:56.049 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:56.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.049 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:56.049 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:56.049 00:40:56.049 Run status group 0 (all jobs): 00:40:56.049 READ: bw=1255KiB/s (1286kB/s), 425KiB/s-832KiB/s (435kB/s-852kB/s), io=12.3MiB (12.9MB), run=10002-10017msec 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:56.049 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.050 00:40:56.050 real 0m11.328s 00:40:56.050 user 0m20.884s 00:40:56.050 sys 0m0.735s 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 ************************************ 00:40:56.050 END TEST fio_dif_1_multi_subsystems 00:40:56.050 ************************************ 00:40:56.050 03:22:06 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:56.050 03:22:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:56.050 03:22:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 ************************************ 00:40:56.050 START TEST fio_dif_rand_params 00:40:56.050 ************************************ 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 bdev_null0 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:56.050 [2024-11-19 03:22:06.532953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:56.050 { 00:40:56.050 "params": { 00:40:56.050 "name": "Nvme$subsystem", 00:40:56.050 "trtype": "$TEST_TRANSPORT", 00:40:56.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:56.050 "adrfam": "ipv4", 00:40:56.050 "trsvcid": "$NVMF_PORT", 00:40:56.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:56.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:56.050 "hdgst": ${hdgst:-false}, 00:40:56.050 "ddgst": ${ddgst:-false} 00:40:56.050 }, 00:40:56.050 "method": "bdev_nvme_attach_controller" 00:40:56.050 } 00:40:56.050 EOF 00:40:56.050 )") 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:56.050 "params": { 00:40:56.050 "name": "Nvme0", 00:40:56.050 "trtype": "tcp", 00:40:56.050 "traddr": "10.0.0.2", 00:40:56.050 "adrfam": "ipv4", 00:40:56.050 "trsvcid": "4420", 00:40:56.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:56.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:56.050 "hdgst": false, 00:40:56.050 "ddgst": false 00:40:56.050 }, 00:40:56.050 "method": "bdev_nvme_attach_controller" 00:40:56.050 }' 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:56.050 03:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:56.309 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:56.309 ... 
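For readers reconstructing this step by hand: the rpc_cmd calls traced above wrap SPDK's scripts/rpc.py, and the fio pass is driven through the spdk_bdev ioengine plugin, with the JSON printed above fed in over /dev/fd/62 and the generated job file over /dev/fd/61. A minimal standalone sketch of the same setup follows; only the arguments are taken verbatim from the trace, while the rpc.py invocation style and the bdev.json / randread.fio file names are illustrative assumptions, not part of the captured log.

  # Illustrative sketch reconstructed from the trace above -- not part of the test output.
  # Assumes a running SPDK nvmf_tgt and scripts/rpc.py reachable as rpc.py.

  # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3 (as created above).
  rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

  # Export it over NVMe/TCP on 10.0.0.2:4420.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Drive it with fio through the spdk_bdev plugin. Here bdev.json would hold the
  # bdev_nvme_attach_controller block printed above, and randread.fio the
  # randread / bs=128k / iodepth=3 / numjobs=3 / runtime=5 job; both file names are hypothetical.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./randread.fio
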
00:40:56.309 fio-3.35 00:40:56.309 Starting 3 threads 00:41:02.866 00:41:02.866 filename0: (groupid=0, jobs=1): err= 0: pid=468086: Tue Nov 19 03:22:12 2024 00:41:02.866 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(145MiB/5046msec) 00:41:02.866 slat (nsec): min=4337, max=64025, avg=16317.53, stdev=6011.62 00:41:02.866 clat (usec): min=4468, max=55984, avg=13002.17, stdev=6267.53 00:41:02.866 lat (usec): min=4481, max=56016, avg=13018.48, stdev=6267.76 00:41:02.866 clat percentiles (usec): 00:41:02.866 | 1.00th=[ 5145], 5.00th=[ 7046], 10.00th=[ 8225], 20.00th=[ 9110], 00:41:02.866 | 30.00th=[10028], 40.00th=[11731], 50.00th=[12911], 60.00th=[13698], 00:41:02.866 | 70.00th=[14615], 80.00th=[15401], 90.00th=[16188], 95.00th=[16909], 00:41:02.866 | 99.00th=[51643], 99.50th=[53740], 99.90th=[55313], 99.95th=[55837], 00:41:02.867 | 99.99th=[55837] 00:41:02.867 bw ( KiB/s): min=24576, max=37376, per=33.35%, avg=29619.20, stdev=3875.02, samples=10 00:41:02.867 iops : min= 192, max= 292, avg=231.40, stdev=30.27, samples=10 00:41:02.867 lat (msec) : 10=30.03%, 20=67.99%, 50=0.60%, 100=1.38% 00:41:02.867 cpu : usr=85.31%, sys=9.77%, ctx=594, majf=0, minf=73 00:41:02.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.867 issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:02.867 filename0: (groupid=0, jobs=1): err= 0: pid=468087: Tue Nov 19 03:22:12 2024 00:41:02.867 read: IOPS=242, BW=30.4MiB/s (31.8MB/s)(152MiB/5005msec) 00:41:02.867 slat (nsec): min=4010, max=35120, avg=14192.35, stdev=2786.71 00:41:02.867 clat (usec): min=4232, max=53748, avg=12328.44, stdev=7496.85 00:41:02.867 lat (usec): min=4246, max=53761, avg=12342.63, stdev=7496.80 00:41:02.867 clat percentiles (usec): 00:41:02.867 | 1.00th=[ 5145], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8586], 00:41:02.867 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:41:02.867 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13698], 95.00th=[14615], 00:41:02.867 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:41:02.867 | 99.99th=[53740] 00:41:02.867 bw ( KiB/s): min=22528, max=35072, per=34.96%, avg=31052.80, stdev=3960.51, samples=10 00:41:02.867 iops : min= 176, max= 274, avg=242.60, stdev=30.94, samples=10 00:41:02.867 lat (msec) : 10=27.80%, 20=68.75%, 50=1.56%, 100=1.89% 00:41:02.867 cpu : usr=93.09%, sys=6.35%, ctx=17, majf=0, minf=112 00:41:02.867 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.867 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:02.867 filename0: (groupid=0, jobs=1): err= 0: pid=468088: Tue Nov 19 03:22:12 2024 00:41:02.867 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(141MiB/5047msec) 00:41:02.867 slat (nsec): min=4328, max=39981, avg=15437.05, stdev=3559.77 00:41:02.867 clat (usec): min=5337, max=57304, avg=13376.94, stdev=9957.91 00:41:02.867 lat (usec): min=5345, max=57317, avg=13392.38, stdev=9957.73 00:41:02.867 clat percentiles (usec): 00:41:02.867 | 1.00th=[ 6915], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 
9896], 00:41:02.867 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:41:02.867 | 70.00th=[11863], 80.00th=[12256], 90.00th=[13042], 95.00th=[50070], 00:41:02.867 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[57410], 00:41:02.867 | 99.99th=[57410] 00:41:02.867 bw ( KiB/s): min=23808, max=34560, per=32.40%, avg=28774.40, stdev=3441.38, samples=10 00:41:02.867 iops : min= 186, max= 270, avg=224.80, stdev=26.89, samples=10 00:41:02.867 lat (msec) : 10=22.36%, 20=71.34%, 50=1.06%, 100=5.24% 00:41:02.867 cpu : usr=93.14%, sys=6.32%, ctx=9, majf=0, minf=83 00:41:02.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.867 issued rwts: total=1127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:02.867 00:41:02.867 Run status group 0 (all jobs): 00:41:02.867 READ: bw=86.7MiB/s (90.9MB/s), 27.9MiB/s-30.4MiB/s (29.3MB/s-31.8MB/s), io=438MiB (459MB), run=5005-5047msec 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 bdev_null0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 [2024-11-19 03:22:12.617308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 bdev_null1 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 bdev_null2 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:02.867 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.868 { 00:41:02.868 "params": { 00:41:02.868 "name": 
"Nvme$subsystem", 00:41:02.868 "trtype": "$TEST_TRANSPORT", 00:41:02.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.868 "adrfam": "ipv4", 00:41:02.868 "trsvcid": "$NVMF_PORT", 00:41:02.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.868 "hdgst": ${hdgst:-false}, 00:41:02.868 "ddgst": ${ddgst:-false} 00:41:02.868 }, 00:41:02.868 "method": "bdev_nvme_attach_controller" 00:41:02.868 } 00:41:02.868 EOF 00:41:02.868 )") 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.868 { 00:41:02.868 "params": { 00:41:02.868 "name": "Nvme$subsystem", 00:41:02.868 "trtype": "$TEST_TRANSPORT", 00:41:02.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.868 "adrfam": "ipv4", 00:41:02.868 "trsvcid": "$NVMF_PORT", 00:41:02.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.868 "hdgst": ${hdgst:-false}, 00:41:02.868 "ddgst": ${ddgst:-false} 00:41:02.868 }, 00:41:02.868 "method": "bdev_nvme_attach_controller" 00:41:02.868 } 00:41:02.868 EOF 00:41:02.868 )") 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.868 { 00:41:02.868 "params": { 00:41:02.868 "name": "Nvme$subsystem", 00:41:02.868 "trtype": "$TEST_TRANSPORT", 00:41:02.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.868 "adrfam": "ipv4", 00:41:02.868 "trsvcid": "$NVMF_PORT", 00:41:02.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.868 "hdgst": ${hdgst:-false}, 00:41:02.868 "ddgst": ${ddgst:-false} 00:41:02.868 }, 00:41:02.868 "method": "bdev_nvme_attach_controller" 00:41:02.868 } 00:41:02.868 EOF 00:41:02.868 )") 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:02.868 "params": { 00:41:02.868 "name": "Nvme0", 00:41:02.868 "trtype": "tcp", 00:41:02.868 "traddr": "10.0.0.2", 00:41:02.868 "adrfam": "ipv4", 00:41:02.868 "trsvcid": "4420", 00:41:02.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:02.868 "hdgst": false, 00:41:02.868 "ddgst": false 00:41:02.868 }, 00:41:02.868 "method": "bdev_nvme_attach_controller" 00:41:02.868 },{ 00:41:02.868 "params": { 00:41:02.868 "name": "Nvme1", 00:41:02.868 "trtype": "tcp", 00:41:02.868 "traddr": "10.0.0.2", 00:41:02.868 "adrfam": "ipv4", 00:41:02.868 "trsvcid": "4420", 00:41:02.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:02.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:02.868 "hdgst": false, 00:41:02.868 "ddgst": false 00:41:02.868 }, 00:41:02.868 "method": "bdev_nvme_attach_controller" 00:41:02.868 },{ 00:41:02.868 "params": { 00:41:02.868 "name": "Nvme2", 00:41:02.868 "trtype": "tcp", 00:41:02.868 "traddr": "10.0.0.2", 00:41:02.868 "adrfam": "ipv4", 00:41:02.868 "trsvcid": "4420", 00:41:02.868 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:02.868 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:02.868 "hdgst": false, 00:41:02.868 "ddgst": false 00:41:02.868 }, 00:41:02.868 "method": "bdev_nvme_attach_controller" 00:41:02.868 }' 00:41:02.868 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:02.869 03:22:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:02.869 03:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.869 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:02.869 ... 00:41:02.869 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:02.869 ... 00:41:02.869 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:02.869 ... 00:41:02.869 fio-3.35 00:41:02.869 Starting 24 threads 00:41:15.073 00:41:15.073 filename0: (groupid=0, jobs=1): err= 0: pid=468867: Tue Nov 19 03:22:24 2024 00:41:15.073 read: IOPS=419, BW=1680KiB/s (1720kB/s)(16.4MiB/10020msec) 00:41:15.073 slat (usec): min=8, max=102, avg=40.76, stdev=16.76 00:41:15.073 clat (usec): min=14465, max=45324, avg=37761.85, stdev=5143.47 00:41:15.073 lat (usec): min=14540, max=45344, avg=37802.60, stdev=5144.74 00:41:15.073 clat percentiles (usec): 00:41:15.073 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.073 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.073 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.073 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:41:15.073 | 99.99th=[45351] 00:41:15.073 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1676.80, stdev=198.73, samples=20 00:41:15.073 iops : min= 352, max= 480, avg=419.20, stdev=49.68, samples=20 00:41:15.073 lat (msec) : 20=0.76%, 50=99.24% 00:41:15.073 cpu : usr=97.65%, sys=1.70%, ctx=64, majf=0, minf=9 00:41:15.073 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:15.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.073 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.073 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=468868: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10013msec) 00:41:15.074 slat (usec): min=8, max=100, avg=36.84, stdev=20.29 00:41:15.074 clat (usec): min=15488, max=63267, avg=38051.96, stdev=5209.32 00:41:15.074 lat (usec): min=15523, max=63289, avg=38088.81, stdev=5198.94 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.074 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.074 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.074 | 99.00th=[43779], 99.50th=[43779], 99.90th=[63177], 99.95th=[63177], 00:41:15.074 | 99.99th=[63177] 00:41:15.074 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.89, stdev=211.07, samples=19 00:41:15.074 iops : min= 352, max= 480, avg=417.68, stdev=52.77, samples=19 00:41:15.074 lat (msec) : 20=0.38%, 50=99.23%, 100=0.38% 00:41:15.074 cpu : usr=98.18%, sys=1.25%, ctx=84, majf=0, minf=9 00:41:15.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:41:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=468869: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10013msec) 00:41:15.074 slat (nsec): min=8111, max=94742, avg=44046.43, stdev=14957.85 00:41:15.074 clat (usec): min=19954, max=57874, avg=37933.71, stdev=4973.92 00:41:15.074 lat (usec): min=19966, max=57898, avg=37977.75, stdev=4973.70 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.074 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.074 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:41:15.074 | 99.00th=[44303], 99.50th=[45351], 99.90th=[57934], 99.95th=[57934], 00:41:15.074 | 99.99th=[57934] 00:41:15.074 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.74, stdev=202.27, samples=19 00:41:15.074 iops : min= 352, max= 480, avg=417.68, stdev=50.57, samples=19 00:41:15.074 lat (msec) : 20=0.12%, 50=99.50%, 100=0.38% 00:41:15.074 cpu : usr=98.33%, sys=1.26%, ctx=28, majf=0, minf=9 00:41:15.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=468870: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=417, BW=1670KiB/s (1710kB/s)(16.3MiB/10005msec) 00:41:15.074 slat (usec): min=9, max=131, avg=44.93, stdev=15.59 00:41:15.074 clat (usec): min=32645, max=50286, avg=37900.20, stdev=4716.77 00:41:15.074 lat (usec): min=32665, max=50341, avg=37945.14, stdev=4716.59 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.074 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.074 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:41:15.074 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:41:15.074 | 99.99th=[50070] 00:41:15.074 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1677.47, stdev=212.88, samples=19 00:41:15.074 iops : min= 352, max= 480, avg=419.37, stdev=53.22, samples=19 00:41:15.074 lat (msec) : 50=99.95%, 100=0.05% 00:41:15.074 cpu : usr=98.53%, sys=1.06%, ctx=14, majf=0, minf=9 00:41:15.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=468871: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10012msec) 00:41:15.074 slat (nsec): min=7949, max=63928, avg=31527.48, stdev=9750.62 00:41:15.074 clat (usec): min=14962, max=81909, 
avg=38082.84, stdev=5616.89 00:41:15.074 lat (usec): min=14975, max=81928, avg=38114.37, stdev=5617.65 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[16188], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:41:15.074 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34866], 60.00th=[42730], 00:41:15.074 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.074 | 99.00th=[51643], 99.50th=[53740], 99.90th=[62653], 99.95th=[62653], 00:41:15.074 | 99.99th=[82314] 00:41:15.074 bw ( KiB/s): min= 1408, max= 1936, per=4.17%, avg=1670.53, stdev=203.49, samples=19 00:41:15.074 iops : min= 352, max= 484, avg=417.63, stdev=50.87, samples=19 00:41:15.074 lat (msec) : 20=1.13%, 50=97.80%, 100=1.08% 00:41:15.074 cpu : usr=98.40%, sys=1.20%, ctx=13, majf=0, minf=9 00:41:15.074 IO depths : 1=5.2%, 2=11.4%, 4=24.8%, 8=51.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=468872: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=418, BW=1673KiB/s (1713kB/s)(16.4MiB/10021msec) 00:41:15.074 slat (usec): min=8, max=114, avg=25.64, stdev=17.04 00:41:15.074 clat (usec): min=24505, max=45183, avg=38010.69, stdev=4821.75 00:41:15.074 lat (usec): min=24522, max=45205, avg=38036.34, stdev=4826.15 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[26346], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:41:15.074 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34866], 60.00th=[42730], 00:41:15.074 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.074 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:41:15.074 | 99.99th=[45351] 00:41:15.074 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.40, stdev=209.61, samples=20 00:41:15.074 iops : min= 352, max= 480, avg=417.60, stdev=52.40, samples=20 00:41:15.074 lat (msec) : 50=100.00% 00:41:15.074 cpu : usr=97.99%, sys=1.42%, ctx=100, majf=0, minf=9 00:41:15.074 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=468873: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=419, BW=1680KiB/s (1720kB/s)(16.4MiB/10021msec) 00:41:15.074 slat (usec): min=8, max=171, avg=80.99, stdev=15.51 00:41:15.074 clat (usec): min=14038, max=45265, avg=37365.52, stdev=5148.86 00:41:15.074 lat (usec): min=14058, max=45347, avg=37446.50, stdev=5150.69 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[22676], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:15.074 | 30.00th=[33162], 40.00th=[33817], 50.00th=[34341], 60.00th=[42206], 00:41:15.074 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:41:15.074 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[45351], 00:41:15.074 | 99.99th=[45351] 00:41:15.074 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1676.80, stdev=198.73, samples=20 00:41:15.074 iops : 
min= 352, max= 480, avg=419.20, stdev=49.68, samples=20 00:41:15.074 lat (msec) : 20=0.76%, 50=99.24% 00:41:15.074 cpu : usr=98.09%, sys=1.44%, ctx=13, majf=0, minf=9 00:41:15.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=468874: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=419, BW=1678KiB/s (1718kB/s)(16.4MiB/10032msec) 00:41:15.074 slat (nsec): min=5205, max=94504, avg=39394.99, stdev=15072.33 00:41:15.074 clat (usec): min=16754, max=46203, avg=37835.69, stdev=5014.36 00:41:15.074 lat (usec): min=16791, max=46268, avg=37875.09, stdev=5016.05 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[25822], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.074 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.074 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.074 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:41:15.074 | 99.99th=[46400] 00:41:15.074 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1676.80, stdev=215.39, samples=20 00:41:15.074 iops : min= 352, max= 480, avg=419.20, stdev=53.85, samples=20 00:41:15.074 lat (msec) : 20=0.21%, 50=99.79% 00:41:15.074 cpu : usr=98.31%, sys=1.28%, ctx=17, majf=0, minf=9 00:41:15.074 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.074 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.074 filename1: (groupid=0, jobs=1): err= 0: pid=468875: Tue Nov 19 03:22:24 2024 00:41:15.074 read: IOPS=418, BW=1674KiB/s (1714kB/s)(16.4MiB/10019msec) 00:41:15.074 slat (nsec): min=5226, max=97228, avg=45205.32, stdev=14656.02 00:41:15.074 clat (usec): min=22792, max=45388, avg=37850.37, stdev=4808.21 00:41:15.074 lat (usec): min=22841, max=45407, avg=37895.57, stdev=4808.10 00:41:15.074 clat percentiles (usec): 00:41:15.074 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.074 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.075 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:41:15.075 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:41:15.075 | 99.99th=[45351] 00:41:15.075 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.40, stdev=192.45, samples=20 00:41:15.075 iops : min= 352, max= 480, avg=417.60, stdev=48.11, samples=20 00:41:15.075 lat (msec) : 50=100.00% 00:41:15.075 cpu : usr=98.26%, sys=1.26%, ctx=52, majf=0, minf=9 00:41:15.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.075 filename1: (groupid=0, 
jobs=1): err= 0: pid=468876: Tue Nov 19 03:22:24 2024 00:41:15.075 read: IOPS=418, BW=1674KiB/s (1715kB/s)(16.4MiB/10014msec) 00:41:15.075 slat (usec): min=7, max=108, avg=30.75, stdev=24.80 00:41:15.075 clat (usec): min=13821, max=49258, avg=37929.00, stdev=5123.36 00:41:15.075 lat (usec): min=13830, max=49277, avg=37959.76, stdev=5111.45 00:41:15.075 clat percentiles (usec): 00:41:15.075 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.075 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[43254], 00:41:15.075 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.075 | 99.00th=[43779], 99.50th=[43779], 99.90th=[49021], 99.95th=[49021], 00:41:15.075 | 99.99th=[49021] 00:41:15.075 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1677.47, stdev=185.46, samples=19 00:41:15.075 iops : min= 352, max= 480, avg=419.37, stdev=46.37, samples=19 00:41:15.075 lat (msec) : 20=0.38%, 50=99.62% 00:41:15.075 cpu : usr=97.78%, sys=1.54%, ctx=38, majf=0, minf=9 00:41:15.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.075 filename1: (groupid=0, jobs=1): err= 0: pid=468877: Tue Nov 19 03:22:24 2024 00:41:15.075 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10012msec) 00:41:15.075 slat (usec): min=8, max=103, avg=37.89, stdev=14.18 00:41:15.075 clat (usec): min=15349, max=62840, avg=38030.88, stdev=5167.81 00:41:15.075 lat (usec): min=15359, max=62869, avg=38068.77, stdev=5163.99 00:41:15.075 clat percentiles (usec): 00:41:15.075 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:41:15.075 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34866], 60.00th=[42730], 00:41:15.075 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.075 | 99.00th=[43779], 99.50th=[43779], 99.90th=[62653], 99.95th=[62653], 00:41:15.075 | 99.99th=[62653] 00:41:15.075 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.74, stdev=202.27, samples=19 00:41:15.075 iops : min= 352, max= 480, avg=417.68, stdev=50.57, samples=19 00:41:15.075 lat (msec) : 20=0.43%, 50=99.14%, 100=0.43% 00:41:15.075 cpu : usr=98.52%, sys=1.08%, ctx=17, majf=0, minf=9 00:41:15.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.075 filename1: (groupid=0, jobs=1): err= 0: pid=468878: Tue Nov 19 03:22:24 2024 00:41:15.075 read: IOPS=417, BW=1670KiB/s (1710kB/s)(16.3MiB/10004msec) 00:41:15.075 slat (usec): min=14, max=139, avg=49.07, stdev=16.06 00:41:15.075 clat (usec): min=32319, max=45287, avg=37871.93, stdev=4731.72 00:41:15.075 lat (usec): min=32387, max=45350, avg=37921.00, stdev=4728.51 00:41:15.075 clat percentiles (usec): 00:41:15.075 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.075 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.075 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 
95.00th=[43254], 00:41:15.075 | 99.00th=[43779], 99.50th=[44827], 99.90th=[44827], 99.95th=[45351], 00:41:15.075 | 99.99th=[45351] 00:41:15.075 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1677.63, stdev=212.97, samples=19 00:41:15.075 iops : min= 352, max= 480, avg=419.37, stdev=53.22, samples=19 00:41:15.075 lat (msec) : 50=100.00% 00:41:15.075 cpu : usr=97.71%, sys=1.51%, ctx=126, majf=0, minf=9 00:41:15.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.075 filename1: (groupid=0, jobs=1): err= 0: pid=468879: Tue Nov 19 03:22:24 2024 00:41:15.075 read: IOPS=417, BW=1670KiB/s (1710kB/s)(16.3MiB/10004msec) 00:41:15.075 slat (usec): min=14, max=100, avg=44.48, stdev=13.92 00:41:15.075 clat (usec): min=32645, max=45383, avg=37936.15, stdev=4702.79 00:41:15.075 lat (usec): min=32682, max=45412, avg=37980.63, stdev=4702.58 00:41:15.075 clat percentiles (usec): 00:41:15.075 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.075 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.075 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:41:15.075 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:41:15.075 | 99.99th=[45351] 00:41:15.075 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1677.47, stdev=212.88, samples=19 00:41:15.075 iops : min= 352, max= 480, avg=419.37, stdev=53.22, samples=19 00:41:15.075 lat (msec) : 50=100.00% 00:41:15.075 cpu : usr=98.28%, sys=1.31%, ctx=13, majf=0, minf=9 00:41:15.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.075 filename1: (groupid=0, jobs=1): err= 0: pid=468880: Tue Nov 19 03:22:24 2024 00:41:15.075 read: IOPS=418, BW=1674KiB/s (1714kB/s)(16.4MiB/10018msec) 00:41:15.075 slat (nsec): min=5514, max=95708, avg=37134.62, stdev=18328.19 00:41:15.075 clat (usec): min=22863, max=45345, avg=37952.60, stdev=4808.28 00:41:15.075 lat (usec): min=22896, max=45368, avg=37989.73, stdev=4808.81 00:41:15.075 clat percentiles (usec): 00:41:15.075 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:41:15.075 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.075 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.075 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:41:15.075 | 99.99th=[45351] 00:41:15.075 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.55, stdev=192.55, samples=20 00:41:15.075 iops : min= 352, max= 480, avg=417.60, stdev=48.11, samples=20 00:41:15.075 lat (msec) : 50=100.00% 00:41:15.075 cpu : usr=98.43%, sys=1.16%, ctx=13, majf=0, minf=9 00:41:15.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.075 filename1: (groupid=0, jobs=1): err= 0: pid=468881: Tue Nov 19 03:22:24 2024 00:41:15.075 read: IOPS=421, BW=1685KiB/s (1725kB/s)(16.5MiB/10028msec) 00:41:15.075 slat (usec): min=7, max=193, avg=26.08, stdev=27.99 00:41:15.075 clat (usec): min=13366, max=44007, avg=37748.78, stdev=5378.87 00:41:15.075 lat (usec): min=13378, max=44023, avg=37774.85, stdev=5366.22 00:41:15.075 clat percentiles (usec): 00:41:15.075 | 1.00th=[22414], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.075 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[43254], 00:41:15.075 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43779], 95.00th=[43779], 00:41:15.075 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:41:15.075 | 99.99th=[43779] 00:41:15.075 bw ( KiB/s): min= 1408, max= 1920, per=4.20%, avg=1683.20, stdev=204.61, samples=20 00:41:15.075 iops : min= 352, max= 480, avg=420.80, stdev=51.15, samples=20 00:41:15.075 lat (msec) : 20=0.76%, 50=99.24% 00:41:15.075 cpu : usr=98.35%, sys=1.24%, ctx=8, majf=0, minf=9 00:41:15.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.075 filename1: (groupid=0, jobs=1): err= 0: pid=468882: Tue Nov 19 03:22:24 2024 00:41:15.075 read: IOPS=416, BW=1667KiB/s (1707kB/s)(16.3MiB/10013msec) 00:41:15.075 slat (usec): min=8, max=102, avg=34.10, stdev=14.75 00:41:15.075 clat (usec): min=15726, max=63275, avg=38114.17, stdev=5094.72 00:41:15.075 lat (usec): min=15762, max=63291, avg=38148.27, stdev=5096.68 00:41:15.075 clat percentiles (usec): 00:41:15.075 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:41:15.075 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34866], 60.00th=[42730], 00:41:15.075 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.075 | 99.00th=[43779], 99.50th=[44827], 99.90th=[63177], 99.95th=[63177], 00:41:15.075 | 99.99th=[63177] 00:41:15.075 bw ( KiB/s): min= 1424, max= 1920, per=4.17%, avg=1670.89, stdev=204.36, samples=19 00:41:15.075 iops : min= 356, max= 480, avg=417.68, stdev=51.09, samples=19 00:41:15.075 lat (msec) : 20=0.34%, 50=99.23%, 100=0.43% 00:41:15.075 cpu : usr=97.68%, sys=1.52%, ctx=83, majf=0, minf=11 00:41:15.075 IO depths : 1=0.6%, 2=6.8%, 4=25.0%, 8=55.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:41:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.075 issued rwts: total=4174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468883: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10012msec) 00:41:15.076 slat (nsec): min=8386, max=66169, avg=31463.73, stdev=9024.46 00:41:15.076 clat (usec): min=14274, max=62785, avg=38130.08, stdev=5158.50 00:41:15.076 lat (usec): min=14286, max=62805, avg=38161.54, stdev=5158.13 00:41:15.076 clat percentiles (usec): 
00:41:15.076 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:41:15.076 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34866], 60.00th=[42730], 00:41:15.076 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.076 | 99.00th=[43779], 99.50th=[43779], 99.90th=[62653], 99.95th=[62653], 00:41:15.076 | 99.99th=[62653] 00:41:15.076 bw ( KiB/s): min= 1424, max= 1920, per=4.17%, avg=1670.89, stdev=204.36, samples=19 00:41:15.076 iops : min= 356, max= 480, avg=417.68, stdev=51.09, samples=19 00:41:15.076 lat (msec) : 20=0.43%, 50=99.09%, 100=0.48% 00:41:15.076 cpu : usr=96.03%, sys=2.59%, ctx=639, majf=0, minf=9 00:41:15.076 IO depths : 1=0.6%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:41:15.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468884: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=418, BW=1673KiB/s (1713kB/s)(16.4MiB/10021msec) 00:41:15.076 slat (nsec): min=7920, max=53812, avg=19760.64, stdev=9656.90 00:41:15.076 clat (usec): min=24517, max=43964, avg=38066.21, stdev=4810.02 00:41:15.076 lat (usec): min=24541, max=43982, avg=38085.97, stdev=4810.14 00:41:15.076 clat percentiles (usec): 00:41:15.076 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:41:15.076 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34866], 60.00th=[43254], 00:41:15.076 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.076 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:41:15.076 | 99.99th=[43779] 00:41:15.076 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.40, stdev=209.61, samples=20 00:41:15.076 iops : min= 352, max= 480, avg=417.60, stdev=52.40, samples=20 00:41:15.076 lat (msec) : 50=100.00% 00:41:15.076 cpu : usr=98.30%, sys=1.26%, ctx=13, majf=0, minf=9 00:41:15.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:15.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468885: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=418, BW=1672KiB/s (1712kB/s)(16.4MiB/10014msec) 00:41:15.076 slat (usec): min=15, max=111, avg=74.59, stdev= 9.66 00:41:15.076 clat (usec): min=13850, max=49895, avg=37591.22, stdev=4948.72 00:41:15.076 lat (usec): min=13874, max=49921, avg=37665.81, stdev=4949.70 00:41:15.076 clat percentiles (usec): 00:41:15.076 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[33162], 00:41:15.076 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[42206], 00:41:15.076 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:41:15.076 | 99.00th=[43779], 99.50th=[43779], 99.90th=[50070], 99.95th=[50070], 00:41:15.076 | 99.99th=[50070] 00:41:15.076 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1677.47, stdev=185.46, samples=19 00:41:15.076 iops : min= 352, max= 480, avg=419.37, stdev=46.37, samples=19 00:41:15.076 lat (msec) : 20=0.24%, 50=99.76% 00:41:15.076 cpu : usr=98.58%, 
sys=0.95%, ctx=14, majf=0, minf=9 00:41:15.076 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 issued rwts: total=4186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468886: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=419, BW=1680KiB/s (1720kB/s)(16.4MiB/10020msec) 00:41:15.076 slat (usec): min=8, max=102, avg=38.29, stdev=18.36 00:41:15.076 clat (usec): min=13541, max=45244, avg=37814.52, stdev=5202.81 00:41:15.076 lat (usec): min=13568, max=45274, avg=37852.80, stdev=5200.67 00:41:15.076 clat percentiles (usec): 00:41:15.076 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.076 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.076 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.076 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:41:15.076 | 99.99th=[45351] 00:41:15.076 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1676.80, stdev=198.73, samples=20 00:41:15.076 iops : min= 352, max= 480, avg=419.20, stdev=49.68, samples=20 00:41:15.076 lat (msec) : 20=0.76%, 50=99.24% 00:41:15.076 cpu : usr=96.41%, sys=2.11%, ctx=229, majf=0, minf=9 00:41:15.076 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:15.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468887: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=421, BW=1685KiB/s (1725kB/s)(16.5MiB/10028msec) 00:41:15.076 slat (usec): min=7, max=209, avg=28.70, stdev=29.00 00:41:15.076 clat (usec): min=13202, max=43984, avg=37723.15, stdev=5386.49 00:41:15.076 lat (usec): min=13213, max=44001, avg=37751.85, stdev=5371.80 00:41:15.076 clat percentiles (usec): 00:41:15.076 | 1.00th=[22414], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.076 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[43254], 00:41:15.076 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43779], 95.00th=[43779], 00:41:15.076 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:41:15.076 | 99.99th=[43779] 00:41:15.076 bw ( KiB/s): min= 1408, max= 1920, per=4.20%, avg=1683.20, stdev=191.55, samples=20 00:41:15.076 iops : min= 352, max= 480, avg=420.80, stdev=47.89, samples=20 00:41:15.076 lat (msec) : 20=0.76%, 50=99.24% 00:41:15.076 cpu : usr=98.29%, sys=1.29%, ctx=14, majf=0, minf=9 00:41:15.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:15.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468888: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10012msec) 00:41:15.076 slat 
(nsec): min=8906, max=99360, avg=33272.36, stdev=10780.72 00:41:15.076 clat (usec): min=15406, max=82017, avg=38075.44, stdev=5200.60 00:41:15.076 lat (usec): min=15453, max=82044, avg=38108.72, stdev=5198.96 00:41:15.076 clat percentiles (usec): 00:41:15.076 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:41:15.076 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34866], 60.00th=[42730], 00:41:15.076 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:41:15.076 | 99.00th=[43779], 99.50th=[43779], 99.90th=[62653], 99.95th=[62653], 00:41:15.076 | 99.99th=[82314] 00:41:15.076 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.89, stdev=211.07, samples=19 00:41:15.076 iops : min= 352, max= 480, avg=417.68, stdev=52.77, samples=19 00:41:15.076 lat (msec) : 20=0.43%, 50=99.19%, 100=0.38% 00:41:15.076 cpu : usr=96.87%, sys=1.98%, ctx=230, majf=0, minf=9 00:41:15.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:15.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468889: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=418, BW=1673KiB/s (1714kB/s)(16.4MiB/10020msec) 00:41:15.076 slat (usec): min=13, max=101, avg=43.53, stdev=12.79 00:41:15.076 clat (usec): min=22677, max=45373, avg=37863.83, stdev=4804.93 00:41:15.076 lat (usec): min=22733, max=45409, avg=37907.37, stdev=4804.50 00:41:15.076 clat percentiles (usec): 00:41:15.076 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.076 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.076 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:41:15.076 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:41:15.076 | 99.99th=[45351] 00:41:15.076 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1670.40, stdev=192.45, samples=20 00:41:15.076 iops : min= 352, max= 480, avg=417.60, stdev=48.11, samples=20 00:41:15.076 lat (msec) : 50=100.00% 00:41:15.076 cpu : usr=98.37%, sys=1.18%, ctx=21, majf=0, minf=9 00:41:15.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.076 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.076 filename2: (groupid=0, jobs=1): err= 0: pid=468890: Tue Nov 19 03:22:24 2024 00:41:15.076 read: IOPS=417, BW=1670KiB/s (1710kB/s)(16.3MiB/10005msec) 00:41:15.076 slat (usec): min=12, max=125, avg=44.52, stdev=14.91 00:41:15.076 clat (usec): min=32690, max=45304, avg=37906.58, stdev=4710.92 00:41:15.076 lat (usec): min=32714, max=45368, avg=37951.11, stdev=4710.80 00:41:15.076 clat percentiles (usec): 00:41:15.076 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:41:15.076 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[42730], 00:41:15.076 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:41:15.076 | 99.00th=[43779], 99.50th=[44827], 99.90th=[44827], 99.95th=[45351], 00:41:15.076 | 99.99th=[45351] 
00:41:15.077 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1677.47, stdev=212.88, samples=19 00:41:15.077 iops : min= 352, max= 480, avg=419.37, stdev=53.22, samples=19 00:41:15.077 lat (msec) : 50=100.00% 00:41:15.077 cpu : usr=97.71%, sys=1.58%, ctx=88, majf=0, minf=9 00:41:15.077 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:15.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.077 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.077 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.077 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:15.077 00:41:15.077 Run status group 0 (all jobs): 00:41:15.077 READ: bw=39.2MiB/s (41.1MB/s), 1667KiB/s-1685KiB/s (1707kB/s-1725kB/s), io=393MiB (412MB), run=10004-10032msec 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@36 -- # local sub_id=2 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 bdev_null0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 
03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 [2024-11-19 03:22:24.366401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 bdev_null1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:15.077 03:22:24 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:15.077 { 00:41:15.077 "params": { 00:41:15.077 "name": "Nvme$subsystem", 00:41:15.077 "trtype": "$TEST_TRANSPORT", 00:41:15.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.077 "adrfam": "ipv4", 00:41:15.077 "trsvcid": "$NVMF_PORT", 00:41:15.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.077 "hdgst": ${hdgst:-false}, 00:41:15.077 "ddgst": ${ddgst:-false} 00:41:15.077 }, 00:41:15.077 "method": "bdev_nvme_attach_controller" 00:41:15.077 } 00:41:15.077 EOF 00:41:15.077 )") 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:15.077 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:15.078 { 00:41:15.078 "params": { 00:41:15.078 "name": "Nvme$subsystem", 00:41:15.078 "trtype": "$TEST_TRANSPORT", 00:41:15.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.078 "adrfam": "ipv4", 00:41:15.078 "trsvcid": "$NVMF_PORT", 00:41:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.078 "hdgst": ${hdgst:-false}, 00:41:15.078 "ddgst": ${ddgst:-false} 00:41:15.078 }, 00:41:15.078 "method": "bdev_nvme_attach_controller" 00:41:15.078 } 00:41:15.078 EOF 00:41:15.078 )") 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:15.078 "params": { 00:41:15.078 "name": "Nvme0", 00:41:15.078 "trtype": "tcp", 00:41:15.078 "traddr": "10.0.0.2", 00:41:15.078 "adrfam": "ipv4", 00:41:15.078 "trsvcid": "4420", 00:41:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:15.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:15.078 "hdgst": false, 00:41:15.078 "ddgst": false 00:41:15.078 }, 00:41:15.078 "method": "bdev_nvme_attach_controller" 00:41:15.078 },{ 00:41:15.078 "params": { 00:41:15.078 "name": "Nvme1", 00:41:15.078 "trtype": "tcp", 00:41:15.078 "traddr": "10.0.0.2", 00:41:15.078 "adrfam": "ipv4", 00:41:15.078 "trsvcid": "4420", 00:41:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:15.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:15.078 "hdgst": false, 00:41:15.078 "ddgst": false 00:41:15.078 }, 00:41:15.078 "method": "bdev_nvme_attach_controller" 00:41:15.078 }' 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:15.078 03:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.078 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:15.078 ... 00:41:15.078 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:15.078 ... 
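The trace above builds two things on the fly: a bdev JSON config (fed to fio as /dev/fd/62) holding one bdev_nvme_attach_controller entry per subsystem, and a generated fio job file (/dev/fd/61), and then launches fio with the SPDK bdev plugin preloaded. A rough standalone equivalent of the target-side setup and the fio launch is sketched below; the rpc.py and plugin paths are assumed to be the usual in-tree locations, the RPC arguments and the 10.0.0.2:4420 listener are copied from the trace, and bdev.json / dif.job are placeholders for the two /dev/fd streams.

# Target side: null bdevs exported over NVMe/TCP (arguments as in the trace above)
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ...the same four calls are repeated for bdev_null1 / cnode1...

# Initiator side: fio driven through the SPDK bdev engine. bdev.json would contain the
# bdev_nvme_attach_controller entries printed above; dif.job is the generated job file.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.job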
00:41:15.078 fio-3.35 00:41:15.078 Starting 4 threads 00:41:20.431 00:41:20.431 filename0: (groupid=0, jobs=1): err= 0: pid=470261: Tue Nov 19 03:22:30 2024 00:41:20.431 read: IOPS=1808, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5004msec) 00:41:20.431 slat (nsec): min=3953, max=72946, avg=20953.39, stdev=9567.34 00:41:20.431 clat (usec): min=826, max=9713, avg=4345.51, stdev=709.49 00:41:20.431 lat (usec): min=846, max=9734, avg=4366.46, stdev=709.43 00:41:20.431 clat percentiles (usec): 00:41:20.431 | 1.00th=[ 2114], 5.00th=[ 3490], 10.00th=[ 3818], 20.00th=[ 4080], 00:41:20.431 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:20.431 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5211], 95.00th=[ 5604], 00:41:20.431 | 99.00th=[ 6915], 99.50th=[ 7439], 99.90th=[ 8848], 99.95th=[ 9503], 00:41:20.431 | 99.99th=[ 9765] 00:41:20.431 bw ( KiB/s): min=13104, max=14864, per=24.70%, avg=14468.80, stdev=512.11, samples=10 00:41:20.431 iops : min= 1638, max= 1858, avg=1808.60, stdev=64.01, samples=10 00:41:20.431 lat (usec) : 1000=0.07% 00:41:20.431 lat (msec) : 2=0.84%, 4=13.53%, 10=85.56% 00:41:20.431 cpu : usr=95.70%, sys=3.78%, ctx=9, majf=0, minf=0 00:41:20.431 IO depths : 1=0.5%, 2=17.0%, 4=55.9%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.431 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.431 issued rwts: total=9051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.431 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.431 filename0: (groupid=0, jobs=1): err= 0: pid=470262: Tue Nov 19 03:22:30 2024 00:41:20.431 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5002msec) 00:41:20.431 slat (nsec): min=6639, max=81001, avg=18184.81, stdev=11495.87 00:41:20.431 clat (usec): min=727, max=10493, avg=4312.20, stdev=640.46 00:41:20.431 lat (usec): min=744, max=10514, avg=4330.38, stdev=640.57 00:41:20.431 clat percentiles (usec): 00:41:20.431 | 1.00th=[ 2442], 5.00th=[ 3490], 10.00th=[ 3818], 20.00th=[ 4047], 00:41:20.431 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:41:20.431 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5473], 00:41:20.431 | 99.00th=[ 6652], 99.50th=[ 7177], 99.90th=[ 8029], 99.95th=[ 8848], 00:41:20.431 | 99.99th=[10552] 00:41:20.431 bw ( KiB/s): min=12912, max=15328, per=24.97%, avg=14626.90, stdev=650.53, samples=10 00:41:20.431 iops : min= 1614, max= 1916, avg=1828.30, stdev=81.30, samples=10 00:41:20.431 lat (usec) : 750=0.02%, 1000=0.05% 00:41:20.431 lat (msec) : 2=0.58%, 4=16.56%, 10=82.77%, 20=0.01% 00:41:20.431 cpu : usr=95.28%, sys=4.22%, ctx=13, majf=0, minf=9 00:41:20.431 IO depths : 1=0.3%, 2=15.8%, 4=56.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.432 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.432 issued rwts: total=9148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.432 filename1: (groupid=0, jobs=1): err= 0: pid=470263: Tue Nov 19 03:22:30 2024 00:41:20.432 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.8MiB/5001msec) 00:41:20.432 slat (nsec): min=6994, max=72378, avg=15114.02, stdev=9709.45 00:41:20.432 clat (usec): min=831, max=8198, avg=4302.62, stdev=597.97 00:41:20.432 lat (usec): min=843, max=8217, avg=4317.73, stdev=598.19 00:41:20.432 clat percentiles (usec): 00:41:20.432 | 
1.00th=[ 2507], 5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 4047], 00:41:20.432 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:41:20.432 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5407], 00:41:20.432 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 7635], 99.95th=[ 7832], 00:41:20.432 | 99.99th=[ 8225] 00:41:20.432 bw ( KiB/s): min=12976, max=15312, per=25.19%, avg=14759.11, stdev=695.08, samples=9 00:41:20.432 iops : min= 1622, max= 1914, avg=1844.89, stdev=86.89, samples=9 00:41:20.432 lat (usec) : 1000=0.08% 00:41:20.432 lat (msec) : 2=0.33%, 4=16.86%, 10=82.74% 00:41:20.432 cpu : usr=95.44%, sys=4.10%, ctx=7, majf=0, minf=0 00:41:20.432 IO depths : 1=0.3%, 2=10.5%, 4=60.8%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.432 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.432 issued rwts: total=9196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.432 filename1: (groupid=0, jobs=1): err= 0: pid=470264: Tue Nov 19 03:22:30 2024 00:41:20.432 read: IOPS=1848, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5002msec) 00:41:20.432 slat (nsec): min=6541, max=90004, avg=20041.18, stdev=11681.48 00:41:20.432 clat (usec): min=929, max=8926, avg=4252.82, stdev=639.09 00:41:20.432 lat (usec): min=941, max=8957, avg=4272.86, stdev=640.00 00:41:20.432 clat percentiles (usec): 00:41:20.432 | 1.00th=[ 2311], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3982], 00:41:20.432 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:20.432 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5407], 00:41:20.432 | 99.00th=[ 6521], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 8160], 00:41:20.432 | 99.99th=[ 8979] 00:41:20.432 bw ( KiB/s): min=13104, max=15584, per=25.24%, avg=14786.90, stdev=658.10, samples=10 00:41:20.432 iops : min= 1638, max= 1948, avg=1848.30, stdev=82.26, samples=10 00:41:20.432 lat (usec) : 1000=0.02% 00:41:20.432 lat (msec) : 2=0.75%, 4=19.56%, 10=79.67% 00:41:20.432 cpu : usr=95.40%, sys=4.12%, ctx=7, majf=0, minf=0 00:41:20.432 IO depths : 1=0.6%, 2=18.6%, 4=55.0%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.432 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.432 issued rwts: total=9248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.432 00:41:20.432 Run status group 0 (all jobs): 00:41:20.432 READ: bw=57.2MiB/s (60.0MB/s), 14.1MiB/s-14.4MiB/s (14.8MB/s-15.1MB/s), io=286MiB (300MB), run=5001-5004msec 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 00:41:20.432 real 0m24.105s 00:41:20.432 user 4m32.861s 00:41:20.432 sys 0m6.296s 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 ************************************ 00:41:20.432 END TEST fio_dif_rand_params 00:41:20.432 ************************************ 00:41:20.432 03:22:30 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:20.432 03:22:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:20.432 03:22:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 ************************************ 00:41:20.432 START TEST fio_dif_digest 00:41:20.432 ************************************ 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 bdev_null0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.432 [2024-11-19 03:22:30.688958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.432 { 00:41:20.432 "params": { 00:41:20.432 "name": "Nvme$subsystem", 00:41:20.432 "trtype": "$TEST_TRANSPORT", 00:41:20.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.432 "adrfam": "ipv4", 00:41:20.432 "trsvcid": "$NVMF_PORT", 00:41:20.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.432 
"hdgst": ${hdgst:-false}, 00:41:20.432 "ddgst": ${ddgst:-false} 00:41:20.432 }, 00:41:20.432 "method": "bdev_nvme_attach_controller" 00:41:20.432 } 00:41:20.432 EOF 00:41:20.432 )") 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.432 "params": { 00:41:20.432 "name": "Nvme0", 00:41:20.432 "trtype": "tcp", 00:41:20.432 "traddr": "10.0.0.2", 00:41:20.432 "adrfam": "ipv4", 00:41:20.432 "trsvcid": "4420", 00:41:20.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.432 "hdgst": true, 00:41:20.432 "ddgst": true 00:41:20.432 }, 00:41:20.432 "method": "bdev_nvme_attach_controller" 00:41:20.432 }' 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.432 03:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.432 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:20.432 ... 
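For the digest variant the attach-controller entry above is generated with "hdgst": true and "ddgst": true, so header and data digests are enabled on the NVMe/TCP connection, while dif.sh selects a 128 KiB, iodepth 3, 3-job random-read profile. The actual job file is only passed through /dev/fd/61 and never echoed, so the sketch below is a hand-written approximation: the block size, queue depth, job count and runtime come from the trace, whereas the bdev name Nvme0n1 (controller Nvme0, namespace 1) and the use of thread=1 for the spdk_bdev engine are assumptions.

# Approximate job file for the digest run (values from the trace; bdev name assumed)
cat > digest.job <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF

# bdev.json stands in for the printed JSON config, as in the previous sketch
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --spdk_json_conf bdev.json digest.job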
00:41:20.432 fio-3.35 00:41:20.432 Starting 3 threads 00:41:32.687 00:41:32.687 filename0: (groupid=0, jobs=1): err= 0: pid=471022: Tue Nov 19 03:22:41 2024 00:41:32.687 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(263MiB/10008msec) 00:41:32.687 slat (nsec): min=7756, max=83642, avg=14161.58, stdev=4153.04 00:41:32.687 clat (usec): min=10789, max=53251, avg=14263.83, stdev=1701.80 00:41:32.687 lat (usec): min=10821, max=53264, avg=14277.99, stdev=1701.51 00:41:32.687 clat percentiles (usec): 00:41:32.687 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:41:32.687 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:41:32.687 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:41:32.687 | 99.00th=[16712], 99.50th=[17171], 99.90th=[45876], 99.95th=[50594], 00:41:32.687 | 99.99th=[53216] 00:41:32.687 bw ( KiB/s): min=24320, max=28160, per=34.84%, avg=26880.00, stdev=859.15, samples=20 00:41:32.687 iops : min= 190, max= 220, avg=210.00, stdev= 6.71, samples=20 00:41:32.687 lat (msec) : 20=99.86%, 50=0.05%, 100=0.10% 00:41:32.687 cpu : usr=92.50%, sys=6.99%, ctx=32, majf=0, minf=168 00:41:32.687 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.687 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:32.687 filename0: (groupid=0, jobs=1): err= 0: pid=471023: Tue Nov 19 03:22:41 2024 00:41:32.687 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10044msec) 00:41:32.687 slat (nsec): min=8055, max=36057, avg=13728.30, stdev=2368.23 00:41:32.687 clat (usec): min=8506, max=49127, avg=15080.12, stdev=1489.84 00:41:32.687 lat (usec): min=8518, max=49139, avg=15093.85, stdev=1489.97 00:41:32.687 clat percentiles (usec): 00:41:32.687 | 1.00th=[12518], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:41:32.687 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:41:32.687 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:41:32.687 | 99.00th=[17433], 99.50th=[17957], 99.90th=[46924], 99.95th=[49021], 00:41:32.687 | 99.99th=[49021] 00:41:32.687 bw ( KiB/s): min=24576, max=27136, per=33.03%, avg=25484.80, stdev=547.64, samples=20 00:41:32.687 iops : min= 192, max= 212, avg=199.10, stdev= 4.28, samples=20 00:41:32.687 lat (msec) : 10=0.35%, 20=99.55%, 50=0.10% 00:41:32.687 cpu : usr=93.40%, sys=6.07%, ctx=24, majf=0, minf=191 00:41:32.687 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.687 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:32.687 filename0: (groupid=0, jobs=1): err= 0: pid=471024: Tue Nov 19 03:22:41 2024 00:41:32.687 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10045msec) 00:41:32.687 slat (nsec): min=7852, max=75029, avg=13596.86, stdev=2751.64 00:41:32.687 clat (usec): min=9085, max=52531, avg=15344.42, stdev=1490.13 00:41:32.687 lat (usec): min=9097, max=52543, avg=15358.02, stdev=1490.27 00:41:32.687 clat percentiles (usec): 00:41:32.687 | 1.00th=[12780], 5.00th=[13829], 10.00th=[14222], 20.00th=[14615], 
00:41:32.687 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15533], 00:41:32.687 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:41:32.687 | 99.00th=[17957], 99.50th=[18220], 99.90th=[45876], 99.95th=[52691], 00:41:32.687 | 99.99th=[52691] 00:41:32.687 bw ( KiB/s): min=24320, max=25600, per=32.47%, avg=25049.60, stdev=364.65, samples=20 00:41:32.687 iops : min= 190, max= 200, avg=195.70, stdev= 2.85, samples=20 00:41:32.687 lat (msec) : 10=0.15%, 20=99.74%, 50=0.05%, 100=0.05% 00:41:32.687 cpu : usr=93.61%, sys=5.89%, ctx=23, majf=0, minf=223 00:41:32.687 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.688 issued rwts: total=1959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.688 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:32.688 00:41:32.688 Run status group 0 (all jobs): 00:41:32.688 READ: bw=75.3MiB/s (79.0MB/s), 24.4MiB/s-26.3MiB/s (25.6MB/s-27.5MB/s), io=757MiB (794MB), run=10008-10045msec 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.688 00:41:32.688 real 0m11.085s 00:41:32.688 user 0m29.085s 00:41:32.688 sys 0m2.179s 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:32.688 03:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:32.688 ************************************ 00:41:32.688 END TEST fio_dif_digest 00:41:32.688 ************************************ 00:41:32.688 03:22:41 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:32.688 03:22:41 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:32.688 rmmod nvme_tcp 00:41:32.688 rmmod nvme_fabrics 00:41:32.688 rmmod nvme_keyring 00:41:32.688 03:22:41 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 465098 ']' 00:41:32.688 03:22:41 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 465098 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 465098 ']' 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 465098 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465098 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465098' 00:41:32.688 killing process with pid 465098 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@973 -- # kill 465098 00:41:32.688 03:22:41 nvmf_dif -- common/autotest_common.sh@978 -- # wait 465098 00:41:32.688 03:22:42 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:32.688 03:22:42 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:32.688 Waiting for block devices as requested 00:41:32.688 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:32.946 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:32.947 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:32.947 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:32.947 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:33.204 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:33.204 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:33.204 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:33.204 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:33.462 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:33.462 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:33.462 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:33.462 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:33.720 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:33.720 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:33.721 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:33.981 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:33.981 03:22:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:33.981 03:22:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:33.981 03:22:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.523 03:22:46 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:36.523 
00:41:36.523 real 1m6.429s 00:41:36.523 user 6m29.683s 00:41:36.523 sys 0m17.197s 00:41:36.523 03:22:46 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.523 03:22:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:36.523 ************************************ 00:41:36.523 END TEST nvmf_dif 00:41:36.523 ************************************ 00:41:36.523 03:22:46 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:36.523 03:22:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:36.523 03:22:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:36.523 03:22:46 -- common/autotest_common.sh@10 -- # set +x 00:41:36.523 ************************************ 00:41:36.523 START TEST nvmf_abort_qd_sizes 00:41:36.523 ************************************ 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:36.523 * Looking for test storage... 00:41:36.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.523 --rc genhtml_branch_coverage=1 00:41:36.523 --rc genhtml_function_coverage=1 00:41:36.523 --rc genhtml_legend=1 00:41:36.523 --rc geninfo_all_blocks=1 00:41:36.523 --rc geninfo_unexecuted_blocks=1 00:41:36.523 00:41:36.523 ' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.523 --rc genhtml_branch_coverage=1 00:41:36.523 --rc genhtml_function_coverage=1 00:41:36.523 --rc genhtml_legend=1 00:41:36.523 --rc geninfo_all_blocks=1 00:41:36.523 --rc geninfo_unexecuted_blocks=1 00:41:36.523 00:41:36.523 ' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.523 --rc genhtml_branch_coverage=1 00:41:36.523 --rc genhtml_function_coverage=1 00:41:36.523 --rc genhtml_legend=1 00:41:36.523 --rc geninfo_all_blocks=1 00:41:36.523 --rc geninfo_unexecuted_blocks=1 00:41:36.523 00:41:36.523 ' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.523 --rc genhtml_branch_coverage=1 00:41:36.523 --rc genhtml_function_coverage=1 00:41:36.523 --rc genhtml_legend=1 00:41:36.523 --rc geninfo_all_blocks=1 00:41:36.523 --rc geninfo_unexecuted_blocks=1 00:41:36.523 00:41:36.523 ' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.523 03:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:36.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:36.524 03:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:38.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.427 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:38.428 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:38.428 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:38.428 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:38.428 03:22:48 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:38.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:38.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:41:38.428 00:41:38.428 --- 10.0.0.2 ping statistics --- 00:41:38.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.428 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:38.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:38.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:41:38.428 00:41:38.428 --- 10.0.0.1 ping statistics --- 00:41:38.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.428 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:38.428 03:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:39.806 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:39.806 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:39.806 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:39.806 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:39.806 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:39.806 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:39.806 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:39.806 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:39.806 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:40.745 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=475934 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 475934 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 475934 ']' 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:40.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:40.745 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.004 [2024-11-19 03:22:51.375605] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:41:41.004 [2024-11-19 03:22:51.375704] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.004 [2024-11-19 03:22:51.450339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:41.004 [2024-11-19 03:22:51.498809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:41.004 [2024-11-19 03:22:51.498860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:41.004 [2024-11-19 03:22:51.498886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:41.004 [2024-11-19 03:22:51.498897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:41.004 [2024-11-19 03:22:51.498907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:41.004 [2024-11-19 03:22:51.500354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.004 [2024-11-19 03:22:51.500420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:41.004 [2024-11-19 03:22:51.500500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.004 [2024-11-19 03:22:51.500495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:41.004 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:41.004 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:41:41.004 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:41.004 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:41.004 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:41.263 
03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:41.263 03:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.263 ************************************ 00:41:41.263 START TEST spdk_target_abort 00:41:41.263 ************************************ 00:41:41.263 03:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:41:41.263 03:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:41.263 03:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:41:41.263 03:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.263 03:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.543 spdk_targetn1 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.543 [2024-11-19 03:22:54.512862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.543 [2024-11-19 03:22:54.557231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:44.543 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:44.544 03:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:47.823 Initializing NVMe Controllers 00:41:47.823 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:47.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:47.823 Initialization complete. Launching workers. 00:41:47.823 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13626, failed: 0 00:41:47.823 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 12418 00:41:47.823 success 747, unsuccessful 461, failed 0 00:41:47.823 03:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:47.823 03:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:51.103 Initializing NVMe Controllers 00:41:51.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:51.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:51.103 Initialization complete. Launching workers. 00:41:51.103 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8625, failed: 0 00:41:51.103 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1231, failed to submit 7394 00:41:51.103 success 328, unsuccessful 903, failed 0 00:41:51.103 03:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:51.103 03:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:54.395 Initializing NVMe Controllers 00:41:54.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:54.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:54.395 Initialization complete. Launching workers. 
00:41:54.395 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31705, failed: 0 00:41:54.395 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2749, failed to submit 28956 00:41:54.395 success 534, unsuccessful 2215, failed 0 00:41:54.395 03:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:54.395 03:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.395 03:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:54.395 03:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.395 03:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:54.395 03:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.395 03:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.330 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.330 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 475934 00:41:55.330 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 475934 ']' 00:41:55.330 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 475934 00:41:55.330 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:41:55.330 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475934 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475934' 00:41:55.331 killing process with pid 475934 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 475934 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 475934 00:41:55.331 00:41:55.331 real 0m14.222s 00:41:55.331 user 0m54.252s 00:41:55.331 sys 0m2.353s 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.331 ************************************ 00:41:55.331 END TEST spdk_target_abort 00:41:55.331 ************************************ 00:41:55.331 03:23:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:55.331 03:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:55.331 03:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:55.331 03:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:55.331 ************************************ 00:41:55.331 START TEST kernel_target_abort 00:41:55.331 
************************************ 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:55.331 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:55.589 03:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:56.524 Waiting for block devices as requested 00:41:56.784 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:56.784 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:56.784 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:57.043 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:57.043 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:57.043 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:57.303 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:57.303 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:57.303 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:57.303 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:57.562 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:57.562 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:57.562 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:57.821 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:57.821 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:57.821 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:57.821 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:58.081 No valid GPT data, bailing 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:58.081 03:23:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:41:58.081 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:41:58.082 00:41:58.082 Discovery Log Number of Records 2, Generation counter 2 00:41:58.082 =====Discovery Log Entry 0====== 00:41:58.082 trtype: tcp 00:41:58.082 adrfam: ipv4 00:41:58.082 subtype: current discovery subsystem 00:41:58.082 treq: not specified, sq flow control disable supported 00:41:58.082 portid: 1 00:41:58.082 trsvcid: 4420 00:41:58.082 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:58.082 traddr: 10.0.0.1 00:41:58.082 eflags: none 00:41:58.082 sectype: none 00:41:58.082 =====Discovery Log Entry 1====== 00:41:58.082 trtype: tcp 00:41:58.082 adrfam: ipv4 00:41:58.082 subtype: nvme subsystem 00:41:58.082 treq: not specified, sq flow control disable supported 00:41:58.082 portid: 1 00:41:58.082 trsvcid: 4420 00:41:58.082 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:58.082 traddr: 10.0.0.1 00:41:58.082 eflags: none 00:41:58.082 sectype: none 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.082 03:23:08 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.082 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:58.341 03:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:01.622 Initializing NVMe Controllers 00:42:01.622 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:01.622 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:01.622 Initialization complete. Launching workers. 00:42:01.622 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56056, failed: 0 00:42:01.622 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56056, failed to submit 0 00:42:01.622 success 0, unsuccessful 56056, failed 0 00:42:01.622 03:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:01.622 03:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:04.904 Initializing NVMe Controllers 00:42:04.904 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:04.904 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:04.904 Initialization complete. Launching workers. 
00:42:04.904 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99745, failed: 0 00:42:04.904 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25134, failed to submit 74611 00:42:04.904 success 0, unsuccessful 25134, failed 0 00:42:04.904 03:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:04.904 03:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:08.185 Initializing NVMe Controllers 00:42:08.186 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:08.186 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:08.186 Initialization complete. Launching workers. 00:42:08.186 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98566, failed: 0 00:42:08.186 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24666, failed to submit 73900 00:42:08.186 success 0, unsuccessful 24666, failed 0 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:08.186 03:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:08.765 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:08.765 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:08.765 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:08.765 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:08.765 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:08.765 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:08.765 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:08.765 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:08.765 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:08.765 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:08.765 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:08.765 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:08.765 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:08.765 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:09.023 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:09.023 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:09.963 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:09.963 00:42:09.963 real 0m14.446s 00:42:09.963 user 0m6.664s 00:42:09.963 sys 0m3.285s 00:42:09.963 03:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:09.963 03:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:09.963 ************************************ 00:42:09.963 END TEST kernel_target_abort 00:42:09.963 ************************************ 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:09.963 rmmod nvme_tcp 00:42:09.963 rmmod nvme_fabrics 00:42:09.963 rmmod nvme_keyring 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 475934 ']' 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 475934 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 475934 ']' 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 475934 00:42:09.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (475934) - No such process 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 475934 is not found' 00:42:09.963 Process with pid 475934 is not found 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:09.963 03:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:10.899 Waiting for block devices as requested 00:42:11.158 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:11.158 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:11.416 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:11.416 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:11.416 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:11.416 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:11.674 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:11.674 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:11.674 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:11.933 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:11.933 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:11.933 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:11.933 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:12.190 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:12.190 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:12.190 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:12.190 0000:80:04.0 (8086 
0e20): vfio-pci -> ioatdma 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:12.448 03:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:14.354 03:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:14.354 00:42:14.354 real 0m38.406s 00:42:14.354 user 1m3.191s 00:42:14.354 sys 0m9.191s 00:42:14.354 03:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:14.354 03:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.354 ************************************ 00:42:14.354 END TEST nvmf_abort_qd_sizes 00:42:14.354 ************************************ 00:42:14.613 03:23:24 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:14.613 03:23:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:14.613 03:23:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:14.613 03:23:24 -- common/autotest_common.sh@10 -- # set +x 00:42:14.613 ************************************ 00:42:14.613 START TEST keyring_file 00:42:14.613 ************************************ 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:14.613 * Looking for test storage... 
00:42:14.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:14.613 03:23:25 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:14.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.613 --rc genhtml_branch_coverage=1 00:42:14.613 --rc genhtml_function_coverage=1 00:42:14.613 --rc genhtml_legend=1 00:42:14.613 --rc geninfo_all_blocks=1 00:42:14.613 --rc geninfo_unexecuted_blocks=1 00:42:14.613 00:42:14.613 ' 00:42:14.613 03:23:25 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:14.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.613 --rc genhtml_branch_coverage=1 00:42:14.614 --rc genhtml_function_coverage=1 00:42:14.614 --rc genhtml_legend=1 00:42:14.614 --rc geninfo_all_blocks=1 
00:42:14.614 --rc geninfo_unexecuted_blocks=1 00:42:14.614 00:42:14.614 ' 00:42:14.614 03:23:25 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:14.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.614 --rc genhtml_branch_coverage=1 00:42:14.614 --rc genhtml_function_coverage=1 00:42:14.614 --rc genhtml_legend=1 00:42:14.614 --rc geninfo_all_blocks=1 00:42:14.614 --rc geninfo_unexecuted_blocks=1 00:42:14.614 00:42:14.614 ' 00:42:14.614 03:23:25 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:14.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.614 --rc genhtml_branch_coverage=1 00:42:14.614 --rc genhtml_function_coverage=1 00:42:14.614 --rc genhtml_legend=1 00:42:14.614 --rc geninfo_all_blocks=1 00:42:14.614 --rc geninfo_unexecuted_blocks=1 00:42:14.614 00:42:14.614 ' 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:14.614 03:23:25 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:14.614 03:23:25 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:14.614 03:23:25 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:14.614 03:23:25 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:14.614 03:23:25 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.614 03:23:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.614 03:23:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.614 03:23:25 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:14.614 03:23:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:14.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
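The prep_key trace that follows creates a throwaway key file: keyring/common.sh mktemps a path, converts the raw hex key into the NVMeTLSkey-1 interchange form with format_interchange_psk, and tightens the file mode to 0600 so the keyring will accept it. A minimal stand-alone sketch of the same steps, assuming an SPDK checkout as the working directory (the sourcing step and the key value are taken from this test, everything else is illustrative):

  # Hypothetical reproduction of what prep_key does for key0, outside the test harness.
  source test/keyring/common.sh                                # defines prep_key and pulls in the nvmf/common.sh helpers it needs
  key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0) # prints the generated path, as seen in the trace
  ls -l "$key0path"                                            # a 0600 file holding the NVMeTLSkey-1 interchange key

The resulting paths are what file.sh@50 and file.sh@51 later hand to keyring_file_add_key over the bperf RPC socket.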
00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.I0J5VX0H5b 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.I0J5VX0H5b 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.I0J5VX0H5b 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.I0J5VX0H5b 00:42:14.614 03:23:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.B3VkgSGMI4 00:42:14.614 03:23:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:14.614 03:23:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:14.873 03:23:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.B3VkgSGMI4 00:42:14.873 03:23:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.B3VkgSGMI4 00:42:14.873 03:23:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.B3VkgSGMI4 00:42:14.873 03:23:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=481692 00:42:14.873 03:23:25 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:14.873 03:23:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 481692 00:42:14.873 03:23:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 481692 ']' 00:42:14.873 03:23:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:14.873 03:23:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:14.873 03:23:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:14.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:14.873 03:23:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:14.873 03:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:14.873 [2024-11-19 03:23:25.297214] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:42:14.873 [2024-11-19 03:23:25.297306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481692 ] 00:42:14.873 [2024-11-19 03:23:25.361722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.873 [2024-11-19 03:23:25.408230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:15.132 03:23:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:15.132 [2024-11-19 03:23:25.645965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:15.132 null0 00:42:15.132 [2024-11-19 03:23:25.678057] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:15.132 [2024-11-19 03:23:25.678512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.132 03:23:25 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:15.132 [2024-11-19 03:23:25.702086] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:15.132 request: 00:42:15.132 { 00:42:15.132 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:15.132 "secure_channel": false, 00:42:15.132 "listen_address": { 00:42:15.132 "trtype": "tcp", 00:42:15.132 "traddr": "127.0.0.1", 00:42:15.132 "trsvcid": "4420" 00:42:15.132 }, 00:42:15.132 "method": "nvmf_subsystem_add_listener", 00:42:15.132 "req_id": 1 00:42:15.132 } 00:42:15.132 Got JSON-RPC error response 00:42:15.132 response: 00:42:15.132 { 00:42:15.132 "code": 
-32602, 00:42:15.132 "message": "Invalid parameters" 00:42:15.132 } 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:15.132 03:23:25 keyring_file -- keyring/file.sh@47 -- # bperfpid=481709 00:42:15.132 03:23:25 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:15.132 03:23:25 keyring_file -- keyring/file.sh@49 -- # waitforlisten 481709 /var/tmp/bperf.sock 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 481709 ']' 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:15.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:15.132 03:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:15.390 [2024-11-19 03:23:25.751077] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:42:15.390 [2024-11-19 03:23:25.751170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481709 ] 00:42:15.390 [2024-11-19 03:23:25.817226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.390 [2024-11-19 03:23:25.861949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:15.390 03:23:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:15.390 03:23:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:15.390 03:23:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:15.390 03:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:15.648 03:23:26 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.B3VkgSGMI4 00:42:15.648 03:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.B3VkgSGMI4 00:42:15.905 03:23:26 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:15.905 03:23:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:15.905 03:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:15.905 03:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:15.905 03:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.470 
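The verification steps below read the keys back over the bperf RPC socket and compare the stored path and reference count. The get_key and get_refcnt helpers in keyring/common.sh reduce to a keyring_get_keys call filtered with jq, roughly:

  # What get_refcnt key0 amounts to (same socket path as this test uses).
  ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq '.[] | select(.name == "key0")' \
      | jq -r .refcnt

A refcount of 1 means the key is merely registered; it rises to 2 once a controller is attached with --psk key0, which is what the check at file.sh@60 further down confirms.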
03:23:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.I0J5VX0H5b == \/\t\m\p\/\t\m\p\.\I\0\J\5\V\X\0\H\5\b ]] 00:42:16.470 03:23:26 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:16.470 03:23:26 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:16.470 03:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.470 03:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:16.470 03:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.470 03:23:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.B3VkgSGMI4 == \/\t\m\p\/\t\m\p\.\B\3\V\k\g\S\G\M\I\4 ]] 00:42:16.470 03:23:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:16.470 03:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:16.470 03:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.470 03:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.470 03:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.470 03:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.729 03:23:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:16.729 03:23:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:16.729 03:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:16.729 03:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.988 03:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.988 03:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:16.988 03:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.246 03:23:27 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:17.246 03:23:27 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:17.246 03:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:17.503 [2024-11-19 03:23:27.868523] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:17.503 nvme0n1 00:42:17.503 03:23:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:17.503 03:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:17.503 03:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:17.503 03:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:17.503 03:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:17.503 03:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.789 03:23:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:17.789 03:23:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:17.789 03:23:28 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:42:17.789 03:23:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:17.789 03:23:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:17.789 03:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.789 03:23:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:18.047 03:23:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:18.047 03:23:28 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:18.047 Running I/O for 1 seconds... 00:42:19.420 10367.00 IOPS, 40.50 MiB/s 00:42:19.420 Latency(us) 00:42:19.420 [2024-11-19T02:23:30.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:19.420 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:19.420 nvme0n1 : 1.01 10409.50 40.66 0.00 0.00 12252.91 3786.52 17670.45 00:42:19.420 [2024-11-19T02:23:30.035Z] =================================================================================================================== 00:42:19.420 [2024-11-19T02:23:30.035Z] Total : 10409.50 40.66 0.00 0.00 12252.91 3786.52 17670.45 00:42:19.420 { 00:42:19.420 "results": [ 00:42:19.420 { 00:42:19.420 "job": "nvme0n1", 00:42:19.420 "core_mask": "0x2", 00:42:19.420 "workload": "randrw", 00:42:19.420 "percentage": 50, 00:42:19.420 "status": "finished", 00:42:19.420 "queue_depth": 128, 00:42:19.420 "io_size": 4096, 00:42:19.420 "runtime": 1.00831, 00:42:19.420 "iops": 10409.497079271256, 00:42:19.420 "mibps": 40.66209796590334, 00:42:19.420 "io_failed": 0, 00:42:19.420 "io_timeout": 0, 00:42:19.420 "avg_latency_us": 12252.905799457994, 00:42:19.420 "min_latency_us": 3786.5244444444443, 00:42:19.420 "max_latency_us": 17670.447407407406 00:42:19.420 } 00:42:19.420 ], 00:42:19.420 "core_count": 1 00:42:19.420 } 00:42:19.420 03:23:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:19.420 03:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:19.420 03:23:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:19.420 03:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:19.420 03:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.420 03:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.420 03:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.420 03:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.678 03:23:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:19.678 03:23:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:19.678 03:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:19.678 03:23:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.678 03:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.678 03:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.678 03:23:30 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:19.935 03:23:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:19.935 03:23:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:19.935 03:23:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:19.935 03:23:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:19.935 03:23:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:19.935 03:23:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:19.935 03:23:30 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:19.935 03:23:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:19.935 03:23:30 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:19.935 03:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:20.193 [2024-11-19 03:23:30.777307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:20.193 [2024-11-19 03:23:30.778173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387b70 (107): Transport endpoint is not connected 00:42:20.193 [2024-11-19 03:23:30.779166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387b70 (9): Bad file descriptor 00:42:20.193 [2024-11-19 03:23:30.780165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:20.193 [2024-11-19 03:23:30.780186] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:20.193 [2024-11-19 03:23:30.780215] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:20.193 [2024-11-19 03:23:30.780231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
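The errors above come from the deliberately failing attach at file.sh@70: the initiator offers key1 instead of the key0 that the successful attach at file.sh@58 used, so the connection is dropped and the controller ends up in a failed state. The call is the same bdev_nvme_attach_controller RPC with the other key name; sketched here for reference, using the same NQNs and socket as the test:

  # Expected-failure attach exercised by file.sh@70.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
  # The resulting JSON-RPC error -5 (Input/output error) appears in the response below.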
00:42:20.193 request: 00:42:20.193 { 00:42:20.193 "name": "nvme0", 00:42:20.193 "trtype": "tcp", 00:42:20.193 "traddr": "127.0.0.1", 00:42:20.193 "adrfam": "ipv4", 00:42:20.193 "trsvcid": "4420", 00:42:20.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:20.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:20.193 "prchk_reftag": false, 00:42:20.193 "prchk_guard": false, 00:42:20.193 "hdgst": false, 00:42:20.193 "ddgst": false, 00:42:20.193 "psk": "key1", 00:42:20.193 "allow_unrecognized_csi": false, 00:42:20.193 "method": "bdev_nvme_attach_controller", 00:42:20.193 "req_id": 1 00:42:20.193 } 00:42:20.193 Got JSON-RPC error response 00:42:20.193 response: 00:42:20.193 { 00:42:20.193 "code": -5, 00:42:20.193 "message": "Input/output error" 00:42:20.193 } 00:42:20.193 03:23:30 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:20.193 03:23:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:20.193 03:23:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:20.193 03:23:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:20.193 03:23:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:20.193 03:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:20.193 03:23:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.193 03:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.193 03:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.193 03:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:20.759 03:23:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:20.759 03:23:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:20.759 03:23:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:20.759 03:23:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.759 03:23:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.759 03:23:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:20.759 03:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.759 03:23:31 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:20.759 03:23:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:20.759 03:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:21.016 03:23:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:21.016 03:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:21.582 03:23:31 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:21.582 03:23:31 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:21.582 03:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.582 03:23:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:21.582 03:23:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.I0J5VX0H5b 00:42:21.582 03:23:32 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:21.582 03:23:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:21.582 03:23:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:21.582 03:23:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:21.582 03:23:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:21.583 03:23:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:21.583 03:23:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:21.583 03:23:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:21.583 03:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:21.841 [2024-11-19 03:23:32.416101] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.I0J5VX0H5b': 0100660 00:42:21.841 [2024-11-19 03:23:32.416144] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:21.841 request: 00:42:21.841 { 00:42:21.841 "name": "key0", 00:42:21.841 "path": "/tmp/tmp.I0J5VX0H5b", 00:42:21.841 "method": "keyring_file_add_key", 00:42:21.841 "req_id": 1 00:42:21.841 } 00:42:21.841 Got JSON-RPC error response 00:42:21.841 response: 00:42:21.841 { 00:42:21.841 "code": -1, 00:42:21.841 "message": "Operation not permitted" 00:42:21.841 } 00:42:21.841 03:23:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:21.841 03:23:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:21.841 03:23:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:21.841 03:23:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:21.841 03:23:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.I0J5VX0H5b 00:42:21.841 03:23:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:21.841 03:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I0J5VX0H5b 00:42:22.098 03:23:32 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.I0J5VX0H5b 00:42:22.098 03:23:32 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:22.098 03:23:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:22.098 03:23:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.098 03:23:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.098 03:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.098 03:23:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:22.664 03:23:32 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:22.664 03:23:32 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:22.664 03:23:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:22.664 03:23:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:22.664 03:23:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:22.664 03:23:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:22.664 03:23:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:22.664 03:23:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:22.664 03:23:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:22.664 03:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:22.664 [2024-11-19 03:23:33.238337] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.I0J5VX0H5b': No such file or directory 00:42:22.664 [2024-11-19 03:23:33.238380] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:22.664 [2024-11-19 03:23:33.238418] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:22.664 [2024-11-19 03:23:33.238438] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:22.664 [2024-11-19 03:23:33.238452] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:22.664 [2024-11-19 03:23:33.238465] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:22.664 request: 00:42:22.664 { 00:42:22.664 "name": "nvme0", 00:42:22.664 "trtype": "tcp", 00:42:22.664 "traddr": "127.0.0.1", 00:42:22.664 "adrfam": "ipv4", 00:42:22.664 "trsvcid": "4420", 00:42:22.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:22.664 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:22.664 "prchk_reftag": false, 00:42:22.664 "prchk_guard": false, 00:42:22.664 "hdgst": false, 00:42:22.664 "ddgst": false, 00:42:22.664 "psk": "key0", 00:42:22.664 "allow_unrecognized_csi": false, 00:42:22.664 "method": "bdev_nvme_attach_controller", 00:42:22.664 "req_id": 1 00:42:22.664 } 00:42:22.664 Got JSON-RPC error response 00:42:22.664 response: 00:42:22.664 { 00:42:22.664 "code": -19, 00:42:22.664 "message": "No such device" 00:42:22.664 } 00:42:22.664 03:23:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:22.664 03:23:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:22.664 03:23:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:22.664 03:23:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:22.664 03:23:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:22.664 03:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:22.922 03:23:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:22.922 03:23:33 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:22.922 03:23:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:22.922 03:23:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:22.922 03:23:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:22.922 03:23:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:22.922 03:23:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uSBdaTfhjL 00:42:22.922 03:23:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:22.922 03:23:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:22.922 03:23:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:22.922 03:23:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:22.922 03:23:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:22.922 03:23:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:22.922 03:23:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:23.179 03:23:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uSBdaTfhjL 00:42:23.179 03:23:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uSBdaTfhjL 00:42:23.179 03:23:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.uSBdaTfhjL 00:42:23.179 03:23:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uSBdaTfhjL 00:42:23.179 03:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uSBdaTfhjL 00:42:23.437 03:23:33 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.437 03:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.693 nvme0n1 00:42:23.693 03:23:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:23.693 03:23:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:23.693 03:23:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:23.693 03:23:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:23.693 03:23:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:23.693 03:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:23.949 03:23:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:23.949 03:23:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:23.949 03:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:24.207 03:23:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:24.207 03:23:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:24.207 03:23:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.207 03:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:24.207 03:23:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.464 03:23:34 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:24.464 03:23:34 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:24.464 03:23:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:24.464 03:23:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.464 03:23:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.464 03:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.464 03:23:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.721 03:23:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:24.721 03:23:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:24.721 03:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:24.979 03:23:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:24.979 03:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.979 03:23:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:25.238 03:23:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:25.238 03:23:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uSBdaTfhjL 00:42:25.238 03:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uSBdaTfhjL 00:42:25.495 03:23:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.B3VkgSGMI4 00:42:25.495 03:23:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.B3VkgSGMI4 00:42:26.059 03:23:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:26.059 03:23:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:26.317 nvme0n1 00:42:26.317 03:23:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:26.317 03:23:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:26.575 03:23:37 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:26.576 "subsystems": [ 00:42:26.576 { 00:42:26.576 "subsystem": "keyring", 00:42:26.576 "config": [ 00:42:26.576 { 00:42:26.576 "method": "keyring_file_add_key", 00:42:26.576 "params": { 00:42:26.576 "name": "key0", 00:42:26.576 "path": "/tmp/tmp.uSBdaTfhjL" 00:42:26.576 } 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "method": "keyring_file_add_key", 00:42:26.576 "params": { 00:42:26.576 "name": "key1", 00:42:26.576 "path": "/tmp/tmp.B3VkgSGMI4" 00:42:26.576 } 00:42:26.576 } 00:42:26.576 ] 
00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "subsystem": "iobuf", 00:42:26.576 "config": [ 00:42:26.576 { 00:42:26.576 "method": "iobuf_set_options", 00:42:26.576 "params": { 00:42:26.576 "small_pool_count": 8192, 00:42:26.576 "large_pool_count": 1024, 00:42:26.576 "small_bufsize": 8192, 00:42:26.576 "large_bufsize": 135168, 00:42:26.576 "enable_numa": false 00:42:26.576 } 00:42:26.576 } 00:42:26.576 ] 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "subsystem": "sock", 00:42:26.576 "config": [ 00:42:26.576 { 00:42:26.576 "method": "sock_set_default_impl", 00:42:26.576 "params": { 00:42:26.576 "impl_name": "posix" 00:42:26.576 } 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "method": "sock_impl_set_options", 00:42:26.576 "params": { 00:42:26.576 "impl_name": "ssl", 00:42:26.576 "recv_buf_size": 4096, 00:42:26.576 "send_buf_size": 4096, 00:42:26.576 "enable_recv_pipe": true, 00:42:26.576 "enable_quickack": false, 00:42:26.576 "enable_placement_id": 0, 00:42:26.576 "enable_zerocopy_send_server": true, 00:42:26.576 "enable_zerocopy_send_client": false, 00:42:26.576 "zerocopy_threshold": 0, 00:42:26.576 "tls_version": 0, 00:42:26.576 "enable_ktls": false 00:42:26.576 } 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "method": "sock_impl_set_options", 00:42:26.576 "params": { 00:42:26.576 "impl_name": "posix", 00:42:26.576 "recv_buf_size": 2097152, 00:42:26.576 "send_buf_size": 2097152, 00:42:26.576 "enable_recv_pipe": true, 00:42:26.576 "enable_quickack": false, 00:42:26.576 "enable_placement_id": 0, 00:42:26.576 "enable_zerocopy_send_server": true, 00:42:26.576 "enable_zerocopy_send_client": false, 00:42:26.576 "zerocopy_threshold": 0, 00:42:26.576 "tls_version": 0, 00:42:26.576 "enable_ktls": false 00:42:26.576 } 00:42:26.576 } 00:42:26.576 ] 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "subsystem": "vmd", 00:42:26.576 "config": [] 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "subsystem": "accel", 00:42:26.576 "config": [ 00:42:26.576 { 00:42:26.576 "method": "accel_set_options", 00:42:26.576 "params": { 00:42:26.576 "small_cache_size": 128, 00:42:26.576 "large_cache_size": 16, 00:42:26.576 "task_count": 2048, 00:42:26.576 "sequence_count": 2048, 00:42:26.576 "buf_count": 2048 00:42:26.576 } 00:42:26.576 } 00:42:26.576 ] 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "subsystem": "bdev", 00:42:26.576 "config": [ 00:42:26.576 { 00:42:26.576 "method": "bdev_set_options", 00:42:26.576 "params": { 00:42:26.576 "bdev_io_pool_size": 65535, 00:42:26.576 "bdev_io_cache_size": 256, 00:42:26.576 "bdev_auto_examine": true, 00:42:26.576 "iobuf_small_cache_size": 128, 00:42:26.576 "iobuf_large_cache_size": 16 00:42:26.576 } 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "method": "bdev_raid_set_options", 00:42:26.576 "params": { 00:42:26.576 "process_window_size_kb": 1024, 00:42:26.576 "process_max_bandwidth_mb_sec": 0 00:42:26.576 } 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "method": "bdev_iscsi_set_options", 00:42:26.576 "params": { 00:42:26.576 "timeout_sec": 30 00:42:26.576 } 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "method": "bdev_nvme_set_options", 00:42:26.576 "params": { 00:42:26.576 "action_on_timeout": "none", 00:42:26.576 "timeout_us": 0, 00:42:26.576 "timeout_admin_us": 0, 00:42:26.576 "keep_alive_timeout_ms": 10000, 00:42:26.576 "arbitration_burst": 0, 00:42:26.576 "low_priority_weight": 0, 00:42:26.576 "medium_priority_weight": 0, 00:42:26.576 "high_priority_weight": 0, 00:42:26.576 "nvme_adminq_poll_period_us": 10000, 00:42:26.576 "nvme_ioq_poll_period_us": 0, 00:42:26.576 "io_queue_requests": 512, 
00:42:26.576 "delay_cmd_submit": true, 00:42:26.576 "transport_retry_count": 4, 00:42:26.576 "bdev_retry_count": 3, 00:42:26.576 "transport_ack_timeout": 0, 00:42:26.576 "ctrlr_loss_timeout_sec": 0, 00:42:26.576 "reconnect_delay_sec": 0, 00:42:26.576 "fast_io_fail_timeout_sec": 0, 00:42:26.576 "disable_auto_failback": false, 00:42:26.576 "generate_uuids": false, 00:42:26.576 "transport_tos": 0, 00:42:26.576 "nvme_error_stat": false, 00:42:26.576 "rdma_srq_size": 0, 00:42:26.576 "io_path_stat": false, 00:42:26.576 "allow_accel_sequence": false, 00:42:26.576 "rdma_max_cq_size": 0, 00:42:26.576 "rdma_cm_event_timeout_ms": 0, 00:42:26.576 "dhchap_digests": [ 00:42:26.576 "sha256", 00:42:26.576 "sha384", 00:42:26.576 "sha512" 00:42:26.576 ], 00:42:26.576 "dhchap_dhgroups": [ 00:42:26.576 "null", 00:42:26.576 "ffdhe2048", 00:42:26.576 "ffdhe3072", 00:42:26.576 "ffdhe4096", 00:42:26.576 "ffdhe6144", 00:42:26.576 "ffdhe8192" 00:42:26.576 ] 00:42:26.576 } 00:42:26.576 }, 00:42:26.576 { 00:42:26.576 "method": "bdev_nvme_attach_controller", 00:42:26.576 "params": { 00:42:26.576 "name": "nvme0", 00:42:26.576 "trtype": "TCP", 00:42:26.576 "adrfam": "IPv4", 00:42:26.576 "traddr": "127.0.0.1", 00:42:26.576 "trsvcid": "4420", 00:42:26.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:26.576 "prchk_reftag": false, 00:42:26.576 "prchk_guard": false, 00:42:26.577 "ctrlr_loss_timeout_sec": 0, 00:42:26.577 "reconnect_delay_sec": 0, 00:42:26.577 "fast_io_fail_timeout_sec": 0, 00:42:26.577 "psk": "key0", 00:42:26.577 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:26.577 "hdgst": false, 00:42:26.577 "ddgst": false, 00:42:26.577 "multipath": "multipath" 00:42:26.577 } 00:42:26.577 }, 00:42:26.577 { 00:42:26.577 "method": "bdev_nvme_set_hotplug", 00:42:26.577 "params": { 00:42:26.577 "period_us": 100000, 00:42:26.577 "enable": false 00:42:26.577 } 00:42:26.577 }, 00:42:26.577 { 00:42:26.577 "method": "bdev_wait_for_examine" 00:42:26.577 } 00:42:26.577 ] 00:42:26.577 }, 00:42:26.577 { 00:42:26.577 "subsystem": "nbd", 00:42:26.577 "config": [] 00:42:26.577 } 00:42:26.577 ] 00:42:26.577 }' 00:42:26.577 03:23:37 keyring_file -- keyring/file.sh@115 -- # killprocess 481709 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 481709 ']' 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 481709 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 481709 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 481709' 00:42:26.577 killing process with pid 481709 00:42:26.577 03:23:37 keyring_file -- common/autotest_common.sh@973 -- # kill 481709 00:42:26.577 Received shutdown signal, test time was about 1.000000 seconds 00:42:26.577 00:42:26.577 Latency(us) 00:42:26.577 [2024-11-19T02:23:37.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:26.577 [2024-11-19T02:23:37.192Z] =================================================================================================================== 00:42:26.577 [2024-11-19T02:23:37.192Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:26.577 
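The configuration captured above by save_config is not re-applied over RPC; the test instead stops the first bdevperf and launches a second one (file.sh@116 below) with the whole JSON fed in on -c /dev/fd/63, i.e. via process substitution. A stripped-down version of that pattern, using the same bdevperf flags as the trace ($bperfpid here stands for the pid of the first instance and is hypothetical):

  # Capture the live config from the running bdevperf, then replay it into a fresh instance.
  config=$(./scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  kill "$bperfpid" && wait "$bperfpid"          # free the RPC socket before reusing it
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")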
03:23:37 keyring_file -- common/autotest_common.sh@978 -- # wait 481709 00:42:26.836 03:23:37 keyring_file -- keyring/file.sh@118 -- # bperfpid=483167 00:42:26.836 03:23:37 keyring_file -- keyring/file.sh@120 -- # waitforlisten 483167 /var/tmp/bperf.sock 00:42:26.836 03:23:37 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 483167 ']' 00:42:26.836 03:23:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:26.836 03:23:37 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:26.836 03:23:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:26.836 03:23:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:26.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:26.836 03:23:37 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:26.836 "subsystems": [ 00:42:26.836 { 00:42:26.836 "subsystem": "keyring", 00:42:26.836 "config": [ 00:42:26.836 { 00:42:26.836 "method": "keyring_file_add_key", 00:42:26.836 "params": { 00:42:26.836 "name": "key0", 00:42:26.836 "path": "/tmp/tmp.uSBdaTfhjL" 00:42:26.836 } 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "method": "keyring_file_add_key", 00:42:26.836 "params": { 00:42:26.836 "name": "key1", 00:42:26.836 "path": "/tmp/tmp.B3VkgSGMI4" 00:42:26.836 } 00:42:26.836 } 00:42:26.836 ] 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "subsystem": "iobuf", 00:42:26.836 "config": [ 00:42:26.836 { 00:42:26.836 "method": "iobuf_set_options", 00:42:26.836 "params": { 00:42:26.836 "small_pool_count": 8192, 00:42:26.836 "large_pool_count": 1024, 00:42:26.836 "small_bufsize": 8192, 00:42:26.836 "large_bufsize": 135168, 00:42:26.836 "enable_numa": false 00:42:26.836 } 00:42:26.836 } 00:42:26.836 ] 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "subsystem": "sock", 00:42:26.836 "config": [ 00:42:26.836 { 00:42:26.836 "method": "sock_set_default_impl", 00:42:26.836 "params": { 00:42:26.836 "impl_name": "posix" 00:42:26.836 } 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "method": "sock_impl_set_options", 00:42:26.836 "params": { 00:42:26.836 "impl_name": "ssl", 00:42:26.836 "recv_buf_size": 4096, 00:42:26.836 "send_buf_size": 4096, 00:42:26.836 "enable_recv_pipe": true, 00:42:26.836 "enable_quickack": false, 00:42:26.836 "enable_placement_id": 0, 00:42:26.836 "enable_zerocopy_send_server": true, 00:42:26.836 "enable_zerocopy_send_client": false, 00:42:26.836 "zerocopy_threshold": 0, 00:42:26.836 "tls_version": 0, 00:42:26.836 "enable_ktls": false 00:42:26.836 } 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "method": "sock_impl_set_options", 00:42:26.836 "params": { 00:42:26.836 "impl_name": "posix", 00:42:26.836 "recv_buf_size": 2097152, 00:42:26.836 "send_buf_size": 2097152, 00:42:26.836 "enable_recv_pipe": true, 00:42:26.836 "enable_quickack": false, 00:42:26.836 "enable_placement_id": 0, 00:42:26.836 "enable_zerocopy_send_server": true, 00:42:26.836 "enable_zerocopy_send_client": false, 00:42:26.836 "zerocopy_threshold": 0, 00:42:26.836 "tls_version": 0, 00:42:26.836 "enable_ktls": false 00:42:26.836 } 00:42:26.836 } 00:42:26.836 ] 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "subsystem": "vmd", 00:42:26.836 "config": [] 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "subsystem": "accel", 00:42:26.836 "config": [ 
00:42:26.836 { 00:42:26.836 "method": "accel_set_options", 00:42:26.836 "params": { 00:42:26.836 "small_cache_size": 128, 00:42:26.836 "large_cache_size": 16, 00:42:26.836 "task_count": 2048, 00:42:26.836 "sequence_count": 2048, 00:42:26.836 "buf_count": 2048 00:42:26.836 } 00:42:26.836 } 00:42:26.836 ] 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "subsystem": "bdev", 00:42:26.836 "config": [ 00:42:26.836 { 00:42:26.836 "method": "bdev_set_options", 00:42:26.836 "params": { 00:42:26.836 "bdev_io_pool_size": 65535, 00:42:26.836 "bdev_io_cache_size": 256, 00:42:26.836 "bdev_auto_examine": true, 00:42:26.836 "iobuf_small_cache_size": 128, 00:42:26.836 "iobuf_large_cache_size": 16 00:42:26.836 } 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "method": "bdev_raid_set_options", 00:42:26.836 "params": { 00:42:26.836 "process_window_size_kb": 1024, 00:42:26.836 "process_max_bandwidth_mb_sec": 0 00:42:26.836 } 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "method": "bdev_iscsi_set_options", 00:42:26.836 "params": { 00:42:26.836 "timeout_sec": 30 00:42:26.836 } 00:42:26.836 }, 00:42:26.836 { 00:42:26.836 "method": "bdev_nvme_set_options", 00:42:26.836 "params": { 00:42:26.836 "action_on_timeout": "none", 00:42:26.836 "timeout_us": 0, 00:42:26.836 "timeout_admin_us": 0, 00:42:26.836 "keep_alive_timeout_ms": 10000, 00:42:26.836 "arbitration_burst": 0, 00:42:26.836 "low_priority_weight": 0, 00:42:26.836 "medium_priority_weight": 0, 00:42:26.836 "high_priority_weight": 0, 00:42:26.836 "nvme_adminq_poll_period_us": 10000, 00:42:26.836 "nvme_ioq_poll_period_us": 0, 00:42:26.836 "io_queue_requests": 512, 00:42:26.836 "delay_cmd_submit": true, 00:42:26.836 "transport_retry_count": 4, 00:42:26.836 "bdev_retry_count": 3, 00:42:26.836 "transport_ack_timeout": 0, 00:42:26.836 "ctrlr_loss_timeout_sec": 0, 00:42:26.836 "reconnect_delay_sec": 0, 00:42:26.836 "fast_io_fail_timeout_sec": 0, 00:42:26.836 "disable_auto_failback": false, 00:42:26.836 "generate_uuids": false, 00:42:26.836 "transport_tos": 0, 00:42:26.836 "nvme_error_stat": false, 00:42:26.836 "rdma_srq_size": 0, 00:42:26.836 "io_path_stat": false, 00:42:26.836 "allow_accel_sequence": false, 00:42:26.836 "rdma_max_cq_size": 0, 00:42:26.836 "rdma_cm_event_timeout_ms": 0, 00:42:26.836 "dhchap_digests": [ 00:42:26.837 "sha256", 00:42:26.837 "sha384", 00:42:26.837 "sha512" 00:42:26.837 ], 00:42:26.837 "dhchap_dhgroups": [ 00:42:26.837 "null", 00:42:26.837 "ffdhe2048", 00:42:26.837 "ffdhe3072", 00:42:26.837 "ffdhe4096", 00:42:26.837 "ffdhe6144", 00:42:26.837 "ffdhe8192" 00:42:26.837 ] 00:42:26.837 } 00:42:26.837 }, 00:42:26.837 { 00:42:26.837 "method": "bdev_nvme_attach_controller", 00:42:26.837 "params": { 00:42:26.837 "name": "nvme0", 00:42:26.837 "trtype": "TCP", 00:42:26.837 "adrfam": "IPv4", 00:42:26.837 "traddr": "127.0.0.1", 00:42:26.837 "trsvcid": "4420", 00:42:26.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:26.837 "prchk_reftag": false, 00:42:26.837 "prchk_guard": false, 00:42:26.837 "ctrlr_loss_timeout_sec": 0, 00:42:26.837 "reconnect_delay_sec": 0, 00:42:26.837 "fast_io_fail_timeout_sec": 0, 00:42:26.837 "psk": "key0", 00:42:26.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:26.837 "hdgst": false, 00:42:26.837 "ddgst": false, 00:42:26.837 "multipath": "multipath" 00:42:26.837 } 00:42:26.837 }, 00:42:26.837 { 00:42:26.837 "method": "bdev_nvme_set_hotplug", 00:42:26.837 "params": { 00:42:26.837 "period_us": 100000, 00:42:26.837 "enable": false 00:42:26.837 } 00:42:26.837 }, 00:42:26.837 { 00:42:26.837 "method": "bdev_wait_for_examine" 00:42:26.837 } 
00:42:26.837 ] 00:42:26.837 }, 00:42:26.837 { 00:42:26.837 "subsystem": "nbd", 00:42:26.837 "config": [] 00:42:26.837 } 00:42:26.837 ] 00:42:26.837 }' 00:42:26.837 03:23:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:26.837 03:23:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:26.837 [2024-11-19 03:23:37.324291] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:42:26.837 [2024-11-19 03:23:37.324370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483167 ] 00:42:26.837 [2024-11-19 03:23:37.391052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:26.837 [2024-11-19 03:23:37.441092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:27.095 [2024-11-19 03:23:37.625589] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:27.354 03:23:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:27.354 03:23:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:27.354 03:23:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:27.354 03:23:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:27.354 03:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.612 03:23:38 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:27.612 03:23:38 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:27.612 03:23:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.612 03:23:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:27.612 03:23:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.612 03:23:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.612 03:23:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:27.869 03:23:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:27.869 03:23:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:27.870 03:23:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:27.870 03:23:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.870 03:23:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.870 03:23:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.870 03:23:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:28.127 03:23:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:28.127 03:23:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:28.127 03:23:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:28.127 03:23:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:28.388 03:23:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:28.388 03:23:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:28.388 03:23:38 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.uSBdaTfhjL /tmp/tmp.B3VkgSGMI4 00:42:28.388 03:23:38 keyring_file -- keyring/file.sh@20 -- # killprocess 483167 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 483167 ']' 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 483167 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483167 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483167' 00:42:28.388 killing process with pid 483167 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@973 -- # kill 483167 00:42:28.388 Received shutdown signal, test time was about 1.000000 seconds 00:42:28.388 00:42:28.388 Latency(us) 00:42:28.388 [2024-11-19T02:23:39.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.388 [2024-11-19T02:23:39.003Z] =================================================================================================================== 00:42:28.388 [2024-11-19T02:23:39.003Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:28.388 03:23:38 keyring_file -- common/autotest_common.sh@978 -- # wait 483167 00:42:28.687 03:23:39 keyring_file -- keyring/file.sh@21 -- # killprocess 481692 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 481692 ']' 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@958 -- # kill -0 481692 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 481692 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 481692' 00:42:28.687 killing process with pid 481692 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@973 -- # kill 481692 00:42:28.687 03:23:39 keyring_file -- common/autotest_common.sh@978 -- # wait 481692 00:42:29.002 00:42:29.002 real 0m14.454s 00:42:29.002 user 0m37.099s 00:42:29.002 sys 0m3.144s 00:42:29.002 03:23:39 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.002 03:23:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:29.002 ************************************ 00:42:29.002 END TEST keyring_file 00:42:29.002 ************************************ 00:42:29.002 03:23:39 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:42:29.002 03:23:39 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:29.002 03:23:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:29.002 03:23:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:29.002 03:23:39 -- 
common/autotest_common.sh@10 -- # set +x 00:42:29.002 ************************************ 00:42:29.002 START TEST keyring_linux 00:42:29.002 ************************************ 00:42:29.002 03:23:39 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:29.002 Joined session keyring: 232941246 00:42:29.002 * Looking for test storage... 00:42:29.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:29.002 03:23:39 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:29.002 03:23:39 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:42:29.002 03:23:39 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:29.285 03:23:39 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:29.285 03:23:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:29.285 03:23:39 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:29.285 03:23:39 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:29.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.285 --rc genhtml_branch_coverage=1 00:42:29.285 --rc genhtml_function_coverage=1 00:42:29.285 --rc genhtml_legend=1 00:42:29.285 --rc geninfo_all_blocks=1 00:42:29.285 --rc geninfo_unexecuted_blocks=1 00:42:29.285 00:42:29.285 ' 00:42:29.285 03:23:39 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:29.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.285 --rc genhtml_branch_coverage=1 00:42:29.285 --rc genhtml_function_coverage=1 00:42:29.285 --rc genhtml_legend=1 00:42:29.285 --rc geninfo_all_blocks=1 00:42:29.285 --rc geninfo_unexecuted_blocks=1 00:42:29.285 00:42:29.285 ' 00:42:29.285 03:23:39 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:29.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.285 --rc genhtml_branch_coverage=1 00:42:29.285 --rc genhtml_function_coverage=1 00:42:29.285 --rc genhtml_legend=1 00:42:29.285 --rc geninfo_all_blocks=1 00:42:29.285 --rc geninfo_unexecuted_blocks=1 00:42:29.285 00:42:29.285 ' 00:42:29.285 03:23:39 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:29.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.285 --rc genhtml_branch_coverage=1 00:42:29.285 --rc genhtml_function_coverage=1 00:42:29.285 --rc genhtml_legend=1 00:42:29.285 --rc geninfo_all_blocks=1 00:42:29.286 --rc geninfo_unexecuted_blocks=1 00:42:29.286 00:42:29.286 ' 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:29.286 03:23:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:29.286 03:23:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:29.286 03:23:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:29.286 03:23:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:29.286 03:23:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.286 03:23:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.286 03:23:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.286 03:23:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:29.286 03:23:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:29.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:29.286 /tmp/:spdk-test:key0 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:29.286 
03:23:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:29.286 03:23:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:29.286 03:23:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:29.286 /tmp/:spdk-test:key1 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=483650 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:29.286 03:23:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 483650 00:42:29.286 03:23:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 483650 ']' 00:42:29.286 03:23:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:29.286 03:23:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.286 03:23:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:29.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:29.286 03:23:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.286 03:23:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:29.286 [2024-11-19 03:23:39.818448] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
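For reference, the prep_key / format_interchange_psk / format_key steps traced just above appear to reduce to wrapping the raw test key in the NVMe TLS PSK interchange format (NVMeTLSkey-1:<hash>:<base64 blob>:) and writing it to a mode-0600 file. The sketch below is a hedged reconstruction, not the helper source: treating the key string's ASCII bytes as the PSK payload, appending a little-endian CRC-32 before base64-encoding, and the use of python3 are all assumptions inferred from this trace.

key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'PY'
import sys, base64, struct, zlib
key = sys.argv[1].encode()                # configured key, taken as raw ASCII bytes (assumption)
crc = struct.pack("<I", zlib.crc32(key))  # assumed: little-endian CRC-32 appended per the interchange format
# digest 0 in the trace corresponds to the ":00:" field seen in the key loaded further down
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)
echo "$psk" > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0

If the assumptions hold, the resulting string matches the NVMeTLSkey-1:00:... value that is added to the session keyring below.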
00:42:29.286 [2024-11-19 03:23:39.818544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483650 ] 00:42:29.286 [2024-11-19 03:23:39.887341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.544 [2024-11-19 03:23:39.936115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:29.803 03:23:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:29.803 [2024-11-19 03:23:40.200517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:29.803 null0 00:42:29.803 [2024-11-19 03:23:40.232575] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:29.803 [2024-11-19 03:23:40.233125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.803 03:23:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:29.803 285381781 00:42:29.803 03:23:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:29.803 738106279 00:42:29.803 03:23:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=483667 00:42:29.803 03:23:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 483667 /var/tmp/bperf.sock 00:42:29.803 03:23:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 483667 ']' 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:29.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.803 03:23:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:29.803 [2024-11-19 03:23:40.303895] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
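Taken together with the RPC calls that follow, the trace above amounts to a short, reproducible flow: load the prepared PSK files into the kernel session keyring, start bdevperf in wait-for-rpc mode against a private RPC socket, enable the Linux keyring plugin, finish framework init, and attach the NVMe/TCP controller by key name. A hedged sketch, assuming it is run from the spdk repository root (this run uses absolute workspace paths):

# keyctl prints the new key serial numbers (285381781 and 738106279 in this run)
keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

./build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z --wait-for-rpc &
# the test additionally waits for /var/tmp/bperf.sock to appear before issuing RPCs

./scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0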
00:42:29.803 [2024-11-19 03:23:40.303969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483667 ] 00:42:29.803 [2024-11-19 03:23:40.373431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.061 [2024-11-19 03:23:40.424624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:30.061 03:23:40 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.061 03:23:40 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:30.061 03:23:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:30.061 03:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:30.320 03:23:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:30.320 03:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:30.578 03:23:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:30.578 03:23:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:30.835 [2024-11-19 03:23:41.406758] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:31.092 nvme0n1 00:42:31.092 03:23:41 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:31.092 03:23:41 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:31.092 03:23:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:31.092 03:23:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:31.093 03:23:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:31.093 03:23:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.350 03:23:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:31.350 03:23:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:31.350 03:23:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:31.350 03:23:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:31.350 03:23:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:31.350 03:23:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:31.350 03:23:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.609 03:23:42 keyring_linux -- keyring/linux.sh@25 -- # sn=285381781 00:42:31.609 03:23:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:31.609 03:23:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:31.609 03:23:42 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 285381781 == \2\8\5\3\8\1\7\8\1 ]] 00:42:31.609 03:23:42 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 285381781 00:42:31.609 03:23:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:31.609 03:23:42 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:31.609 Running I/O for 1 seconds... 00:42:32.800 11156.00 IOPS, 43.58 MiB/s 00:42:32.800 Latency(us) 00:42:32.800 [2024-11-19T02:23:43.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.800 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:32.800 nvme0n1 : 1.01 11144.80 43.53 0.00 0.00 11409.89 3131.16 14369.37 00:42:32.800 [2024-11-19T02:23:43.415Z] =================================================================================================================== 00:42:32.800 [2024-11-19T02:23:43.415Z] Total : 11144.80 43.53 0.00 0.00 11409.89 3131.16 14369.37 00:42:32.800 { 00:42:32.800 "results": [ 00:42:32.800 { 00:42:32.800 "job": "nvme0n1", 00:42:32.800 "core_mask": "0x2", 00:42:32.800 "workload": "randread", 00:42:32.800 "status": "finished", 00:42:32.800 "queue_depth": 128, 00:42:32.800 "io_size": 4096, 00:42:32.800 "runtime": 1.01258, 00:42:32.800 "iops": 11144.798435679157, 00:42:32.800 "mibps": 43.534368889371706, 00:42:32.800 "io_failed": 0, 00:42:32.800 "io_timeout": 0, 00:42:32.800 "avg_latency_us": 11409.887610889578, 00:42:32.801 "min_latency_us": 3131.1644444444446, 00:42:32.801 "max_latency_us": 14369.374814814815 00:42:32.801 } 00:42:32.801 ], 00:42:32.801 "core_count": 1 00:42:32.801 } 00:42:32.801 03:23:43 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:32.801 03:23:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:33.058 03:23:43 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:33.058 03:23:43 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:33.058 03:23:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:33.058 03:23:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:33.058 03:23:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:33.058 03:23:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.315 03:23:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:33.315 03:23:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:33.315 03:23:43 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:33.315 03:23:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:33.316 03:23:43 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:42:33.316 03:23:43 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:33.316 03:23:43 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:33.316 03:23:43 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:33.316 03:23:43 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:33.316 03:23:43 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:33.316 03:23:43 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:33.316 03:23:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:33.574 [2024-11-19 03:23:44.006214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:33.574 [2024-11-19 03:23:44.006654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb0900 (107): Transport endpoint is not connected 00:42:33.574 [2024-11-19 03:23:44.007646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb0900 (9): Bad file descriptor 00:42:33.574 [2024-11-19 03:23:44.008646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:33.574 [2024-11-19 03:23:44.008666] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:33.574 [2024-11-19 03:23:44.008699] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:33.574 [2024-11-19 03:23:44.008716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
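The failure above is the point of this step: linux.sh@84 wraps this second attach in the NOT helper, so the keyring_linux test passes only when attaching with :spdk-test:key1 fails, which is what the transport errors here and the JSON-RPC error response that follows record. A rough bash equivalent of that assertion, assuming the same bperf.sock RPC socket (the exact NOT helper in autotest_common.sh is not reproduced here):

if ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
    echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
    exit 1
fi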
00:42:33.574 request: 00:42:33.574 { 00:42:33.574 "name": "nvme0", 00:42:33.574 "trtype": "tcp", 00:42:33.574 "traddr": "127.0.0.1", 00:42:33.574 "adrfam": "ipv4", 00:42:33.574 "trsvcid": "4420", 00:42:33.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:33.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:33.574 "prchk_reftag": false, 00:42:33.574 "prchk_guard": false, 00:42:33.574 "hdgst": false, 00:42:33.574 "ddgst": false, 00:42:33.574 "psk": ":spdk-test:key1", 00:42:33.574 "allow_unrecognized_csi": false, 00:42:33.574 "method": "bdev_nvme_attach_controller", 00:42:33.574 "req_id": 1 00:42:33.574 } 00:42:33.574 Got JSON-RPC error response 00:42:33.574 response: 00:42:33.574 { 00:42:33.574 "code": -5, 00:42:33.574 "message": "Input/output error" 00:42:33.574 } 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@33 -- # sn=285381781 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 285381781 00:42:33.574 1 links removed 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@33 -- # sn=738106279 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 738106279 00:42:33.574 1 links removed 00:42:33.574 03:23:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 483667 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 483667 ']' 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 483667 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483667 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483667' 00:42:33.574 killing process with pid 483667 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@973 -- # kill 483667 00:42:33.574 Received shutdown signal, test time was about 1.000000 seconds 00:42:33.574 00:42:33.574 
Latency(us) 00:42:33.574 [2024-11-19T02:23:44.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:33.574 [2024-11-19T02:23:44.189Z] =================================================================================================================== 00:42:33.574 [2024-11-19T02:23:44.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:33.574 03:23:44 keyring_linux -- common/autotest_common.sh@978 -- # wait 483667 00:42:33.832 03:23:44 keyring_linux -- keyring/linux.sh@42 -- # killprocess 483650 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 483650 ']' 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 483650 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483650 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483650' 00:42:33.832 killing process with pid 483650 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@973 -- # kill 483650 00:42:33.832 03:23:44 keyring_linux -- common/autotest_common.sh@978 -- # wait 483650 00:42:34.091 00:42:34.091 real 0m5.104s 00:42:34.091 user 0m10.141s 00:42:34.091 sys 0m1.665s 00:42:34.091 03:23:44 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.091 03:23:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:34.091 ************************************ 00:42:34.091 END TEST keyring_linux 00:42:34.091 ************************************ 00:42:34.091 03:23:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:34.091 03:23:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:34.091 03:23:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:34.091 03:23:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:34.091 03:23:44 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:34.091 03:23:44 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:34.091 03:23:44 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:34.091 03:23:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:34.091 03:23:44 -- common/autotest_common.sh@10 -- # set +x 00:42:34.091 03:23:44 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:34.091 03:23:44 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:34.091 03:23:44 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:34.091 03:23:44 -- common/autotest_common.sh@10 -- # set +x 00:42:36.626 INFO: APP EXITING 00:42:36.626 INFO: 
killing all VMs 00:42:36.626 INFO: killing vhost app 00:42:36.626 INFO: EXIT DONE 00:42:37.562 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:37.562 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:37.562 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:37.562 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:37.562 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:37.562 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:37.562 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:37.562 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:37.562 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:37.562 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:37.562 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:37.562 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:37.562 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:37.562 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:37.562 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:37.562 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:37.562 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:38.937 Cleaning 00:42:38.937 Removing: /var/run/dpdk/spdk0/config 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:38.937 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:38.937 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:38.937 Removing: /var/run/dpdk/spdk1/config 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:38.937 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:38.937 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:38.937 Removing: /var/run/dpdk/spdk2/config 00:42:38.937 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:38.937 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:38.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:38.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:38.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:38.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:38.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:38.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:38.938 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:38.938 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:38.938 Removing: /var/run/dpdk/spdk3/config 00:42:38.938 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:38.938 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:38.938 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:38.938 Removing: /var/run/dpdk/spdk4/config 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:38.938 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:38.938 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:38.938 Removing: /dev/shm/bdev_svc_trace.1 00:42:38.938 Removing: /dev/shm/nvmf_trace.0 00:42:38.938 Removing: /dev/shm/spdk_tgt_trace.pid99548 00:42:38.938 Removing: /var/run/dpdk/spdk0 00:42:38.938 Removing: /var/run/dpdk/spdk1 00:42:38.938 Removing: /var/run/dpdk/spdk2 00:42:38.938 Removing: /var/run/dpdk/spdk3 00:42:38.938 Removing: /var/run/dpdk/spdk4 00:42:38.938 Removing: /var/run/dpdk/spdk_pid100619 00:42:38.938 Removing: /var/run/dpdk/spdk_pid100753 00:42:38.938 Removing: /var/run/dpdk/spdk_pid101476 00:42:38.938 Removing: /var/run/dpdk/spdk_pid101487 00:42:38.938 Removing: /var/run/dpdk/spdk_pid101745 00:42:38.938 Removing: /var/run/dpdk/spdk_pid103065 00:42:38.938 Removing: /var/run/dpdk/spdk_pid103980 00:42:38.938 Removing: /var/run/dpdk/spdk_pid104181 00:42:38.938 Removing: /var/run/dpdk/spdk_pid104448 00:42:38.938 Removing: /var/run/dpdk/spdk_pid104711 00:42:38.938 Removing: /var/run/dpdk/spdk_pid104909 00:42:38.938 Removing: /var/run/dpdk/spdk_pid105065 00:42:38.938 Removing: /var/run/dpdk/spdk_pid105217 00:42:38.938 Removing: /var/run/dpdk/spdk_pid105409 00:42:38.938 Removing: /var/run/dpdk/spdk_pid105719 00:42:38.938 Removing: /var/run/dpdk/spdk_pid108183 00:42:38.938 Removing: /var/run/dpdk/spdk_pid108325 00:42:38.938 Removing: /var/run/dpdk/spdk_pid108536 00:42:38.938 Removing: /var/run/dpdk/spdk_pid108551 00:42:38.938 Removing: /var/run/dpdk/spdk_pid108847 00:42:38.938 Removing: /var/run/dpdk/spdk_pid108852 00:42:38.938 Removing: /var/run/dpdk/spdk_pid109274 00:42:38.938 Removing: /var/run/dpdk/spdk_pid109284 00:42:38.938 Removing: /var/run/dpdk/spdk_pid109452 00:42:38.938 Removing: /var/run/dpdk/spdk_pid109577 00:42:38.938 Removing: /var/run/dpdk/spdk_pid109745 00:42:38.938 Removing: /var/run/dpdk/spdk_pid109757 00:42:38.938 Removing: /var/run/dpdk/spdk_pid110255 00:42:38.938 Removing: /var/run/dpdk/spdk_pid110407 00:42:38.938 Removing: /var/run/dpdk/spdk_pid110613 00:42:38.938 Removing: /var/run/dpdk/spdk_pid112840 00:42:38.938 Removing: /var/run/dpdk/spdk_pid115979 00:42:38.938 Removing: /var/run/dpdk/spdk_pid122976 00:42:38.938 Removing: /var/run/dpdk/spdk_pid123401 00:42:38.938 Removing: /var/run/dpdk/spdk_pid125924 00:42:38.938 Removing: 
/var/run/dpdk/spdk_pid126200 00:42:38.938 Removing: /var/run/dpdk/spdk_pid128722 00:42:38.938 Removing: /var/run/dpdk/spdk_pid132468 00:42:38.938 Removing: /var/run/dpdk/spdk_pid134646 00:42:38.938 Removing: /var/run/dpdk/spdk_pid141061 00:42:38.938 Removing: /var/run/dpdk/spdk_pid146300 00:42:38.938 Removing: /var/run/dpdk/spdk_pid147684 00:42:38.938 Removing: /var/run/dpdk/spdk_pid148451 00:42:38.938 Removing: /var/run/dpdk/spdk_pid159338 00:42:38.938 Removing: /var/run/dpdk/spdk_pid161511 00:42:38.938 Removing: /var/run/dpdk/spdk_pid217177 00:42:38.938 Removing: /var/run/dpdk/spdk_pid220465 00:42:38.938 Removing: /var/run/dpdk/spdk_pid224291 00:42:38.938 Removing: /var/run/dpdk/spdk_pid228553 00:42:38.938 Removing: /var/run/dpdk/spdk_pid228566 00:42:38.938 Removing: /var/run/dpdk/spdk_pid229213 00:42:38.938 Removing: /var/run/dpdk/spdk_pid229867 00:42:38.938 Removing: /var/run/dpdk/spdk_pid230402 00:42:38.938 Removing: /var/run/dpdk/spdk_pid230818 00:42:38.938 Removing: /var/run/dpdk/spdk_pid230900 00:42:38.938 Removing: /var/run/dpdk/spdk_pid231080 00:42:38.938 Removing: /var/run/dpdk/spdk_pid231221 00:42:38.938 Removing: /var/run/dpdk/spdk_pid231227 00:42:38.938 Removing: /var/run/dpdk/spdk_pid231881 00:42:38.938 Removing: /var/run/dpdk/spdk_pid232491 00:42:38.938 Removing: /var/run/dpdk/spdk_pid233077 00:42:38.938 Removing: /var/run/dpdk/spdk_pid233472 00:42:38.938 Removing: /var/run/dpdk/spdk_pid233592 00:42:38.938 Removing: /var/run/dpdk/spdk_pid233737 00:42:38.938 Removing: /var/run/dpdk/spdk_pid234647 00:42:38.938 Removing: /var/run/dpdk/spdk_pid235475 00:42:38.938 Removing: /var/run/dpdk/spdk_pid241301 00:42:38.938 Removing: /var/run/dpdk/spdk_pid269631 00:42:38.938 Removing: /var/run/dpdk/spdk_pid272561 00:42:38.938 Removing: /var/run/dpdk/spdk_pid273745 00:42:38.938 Removing: /var/run/dpdk/spdk_pid275077 00:42:38.938 Removing: /var/run/dpdk/spdk_pid275214 00:42:39.197 Removing: /var/run/dpdk/spdk_pid275354 00:42:39.197 Removing: /var/run/dpdk/spdk_pid275493 00:42:39.197 Removing: /var/run/dpdk/spdk_pid275943 00:42:39.197 Removing: /var/run/dpdk/spdk_pid277258 00:42:39.197 Removing: /var/run/dpdk/spdk_pid277999 00:42:39.197 Removing: /var/run/dpdk/spdk_pid278422 00:42:39.197 Removing: /var/run/dpdk/spdk_pid280032 00:42:39.197 Removing: /var/run/dpdk/spdk_pid280338 00:42:39.197 Removing: /var/run/dpdk/spdk_pid280895 00:42:39.197 Removing: /var/run/dpdk/spdk_pid283283 00:42:39.197 Removing: /var/run/dpdk/spdk_pid286703 00:42:39.197 Removing: /var/run/dpdk/spdk_pid286704 00:42:39.197 Removing: /var/run/dpdk/spdk_pid286705 00:42:39.197 Removing: /var/run/dpdk/spdk_pid288924 00:42:39.197 Removing: /var/run/dpdk/spdk_pid291009 00:42:39.197 Removing: /var/run/dpdk/spdk_pid295168 00:42:39.197 Removing: /var/run/dpdk/spdk_pid317871 00:42:39.197 Removing: /var/run/dpdk/spdk_pid321275 00:42:39.197 Removing: /var/run/dpdk/spdk_pid325056 00:42:39.197 Removing: /var/run/dpdk/spdk_pid326004 00:42:39.197 Removing: /var/run/dpdk/spdk_pid327097 00:42:39.197 Removing: /var/run/dpdk/spdk_pid328055 00:42:39.197 Removing: /var/run/dpdk/spdk_pid330812 00:42:39.197 Removing: /var/run/dpdk/spdk_pid333398 00:42:39.197 Removing: /var/run/dpdk/spdk_pid335639 00:42:39.197 Removing: /var/run/dpdk/spdk_pid339995 00:42:39.197 Removing: /var/run/dpdk/spdk_pid339998 00:42:39.197 Removing: /var/run/dpdk/spdk_pid342831 00:42:39.197 Removing: /var/run/dpdk/spdk_pid343031 00:42:39.197 Removing: /var/run/dpdk/spdk_pid343171 00:42:39.197 Removing: /var/run/dpdk/spdk_pid343433 00:42:39.197 Removing: 
/var/run/dpdk/spdk_pid343439 00:42:39.197 Removing: /var/run/dpdk/spdk_pid344511 00:42:39.197 Removing: /var/run/dpdk/spdk_pid345809 00:42:39.197 Removing: /var/run/dpdk/spdk_pid346983 00:42:39.197 Removing: /var/run/dpdk/spdk_pid348163 00:42:39.197 Removing: /var/run/dpdk/spdk_pid349337 00:42:39.197 Removing: /var/run/dpdk/spdk_pid350516 00:42:39.197 Removing: /var/run/dpdk/spdk_pid354950 00:42:39.197 Removing: /var/run/dpdk/spdk_pid355406 00:42:39.197 Removing: /var/run/dpdk/spdk_pid356685 00:42:39.197 Removing: /var/run/dpdk/spdk_pid357422 00:42:39.197 Removing: /var/run/dpdk/spdk_pid361140 00:42:39.197 Removing: /var/run/dpdk/spdk_pid363108 00:42:39.197 Removing: /var/run/dpdk/spdk_pid366547 00:42:39.197 Removing: /var/run/dpdk/spdk_pid369999 00:42:39.197 Removing: /var/run/dpdk/spdk_pid376499 00:42:39.197 Removing: /var/run/dpdk/spdk_pid380955 00:42:39.197 Removing: /var/run/dpdk/spdk_pid380965 00:42:39.197 Removing: /var/run/dpdk/spdk_pid394238 00:42:39.197 Removing: /var/run/dpdk/spdk_pid394759 00:42:39.197 Removing: /var/run/dpdk/spdk_pid395164 00:42:39.197 Removing: /var/run/dpdk/spdk_pid395579 00:42:39.197 Removing: /var/run/dpdk/spdk_pid396160 00:42:39.197 Removing: /var/run/dpdk/spdk_pid396562 00:42:39.197 Removing: /var/run/dpdk/spdk_pid397086 00:42:39.197 Removing: /var/run/dpdk/spdk_pid397486 00:42:39.197 Removing: /var/run/dpdk/spdk_pid399925 00:42:39.197 Removing: /var/run/dpdk/spdk_pid400135 00:42:39.197 Removing: /var/run/dpdk/spdk_pid403937 00:42:39.197 Removing: /var/run/dpdk/spdk_pid404113 00:42:39.197 Removing: /var/run/dpdk/spdk_pid407347 00:42:39.197 Removing: /var/run/dpdk/spdk_pid409960 00:42:39.197 Removing: /var/run/dpdk/spdk_pid416848 00:42:39.197 Removing: /var/run/dpdk/spdk_pid417261 00:42:39.197 Removing: /var/run/dpdk/spdk_pid419761 00:42:39.197 Removing: /var/run/dpdk/spdk_pid420038 00:42:39.197 Removing: /var/run/dpdk/spdk_pid423039 00:42:39.197 Removing: /var/run/dpdk/spdk_pid426841 00:42:39.197 Removing: /var/run/dpdk/spdk_pid428883 00:42:39.197 Removing: /var/run/dpdk/spdk_pid435252 00:42:39.197 Removing: /var/run/dpdk/spdk_pid440415 00:42:39.197 Removing: /var/run/dpdk/spdk_pid441629 00:42:39.197 Removing: /var/run/dpdk/spdk_pid442283 00:42:39.197 Removing: /var/run/dpdk/spdk_pid452450 00:42:39.197 Removing: /var/run/dpdk/spdk_pid454704 00:42:39.197 Removing: /var/run/dpdk/spdk_pid456696 00:42:39.198 Removing: /var/run/dpdk/spdk_pid462237 00:42:39.198 Removing: /var/run/dpdk/spdk_pid462248 00:42:39.198 Removing: /var/run/dpdk/spdk_pid465146 00:42:39.198 Removing: /var/run/dpdk/spdk_pid466544 00:42:39.198 Removing: /var/run/dpdk/spdk_pid467947 00:42:39.198 Removing: /var/run/dpdk/spdk_pid468686 00:42:39.198 Removing: /var/run/dpdk/spdk_pid470082 00:42:39.198 Removing: /var/run/dpdk/spdk_pid470953 00:42:39.198 Removing: /var/run/dpdk/spdk_pid476261 00:42:39.198 Removing: /var/run/dpdk/spdk_pid476621 00:42:39.198 Removing: /var/run/dpdk/spdk_pid477009 00:42:39.198 Removing: /var/run/dpdk/spdk_pid478573 00:42:39.198 Removing: /var/run/dpdk/spdk_pid478969 00:42:39.198 Removing: /var/run/dpdk/spdk_pid479243 00:42:39.198 Removing: /var/run/dpdk/spdk_pid481692 00:42:39.198 Removing: /var/run/dpdk/spdk_pid481709 00:42:39.198 Removing: /var/run/dpdk/spdk_pid483167 00:42:39.198 Removing: /var/run/dpdk/spdk_pid483650 00:42:39.198 Removing: /var/run/dpdk/spdk_pid483667 00:42:39.198 Removing: /var/run/dpdk/spdk_pid97918 00:42:39.198 Removing: /var/run/dpdk/spdk_pid98660 00:42:39.198 Removing: /var/run/dpdk/spdk_pid99548 00:42:39.198 Removing: 
/var/run/dpdk/spdk_pid99931 00:42:39.198 Clean 00:42:39.456 03:23:49 -- common/autotest_common.sh@1453 -- # return 0 00:42:39.456 03:23:49 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:42:39.456 03:23:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:39.456 03:23:49 -- common/autotest_common.sh@10 -- # set +x 00:42:39.456 03:23:49 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:42:39.456 03:23:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:39.456 03:23:49 -- common/autotest_common.sh@10 -- # set +x 00:42:39.456 03:23:49 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:39.456 03:23:49 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:39.456 03:23:49 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:39.456 03:23:49 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:42:39.456 03:23:49 -- spdk/autotest.sh@398 -- # hostname 00:42:39.456 03:23:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:39.714 geninfo: WARNING: invalid characters removed from testname! 00:43:11.800 03:24:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:14.337 03:24:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:17.630 03:24:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:20.170 03:24:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:23.466 03:24:33 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:26.760 03:24:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:29.300 03:24:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:29.300 03:24:39 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:29.300 03:24:39 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:43:29.300 03:24:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:29.300 03:24:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:29.300 03:24:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:29.300 + [[ -n 6077 ]] 00:43:29.300 + sudo kill 6077 00:43:29.311 [Pipeline] } 00:43:29.329 [Pipeline] // stage 00:43:29.335 [Pipeline] } 00:43:29.352 [Pipeline] // timeout 00:43:29.357 [Pipeline] } 00:43:29.370 [Pipeline] // catchError 00:43:29.375 [Pipeline] } 00:43:29.389 [Pipeline] // wrap 00:43:29.395 [Pipeline] } 00:43:29.408 [Pipeline] // catchError 00:43:29.417 [Pipeline] stage 00:43:29.419 [Pipeline] { (Epilogue) 00:43:29.432 [Pipeline] catchError 00:43:29.434 [Pipeline] { 00:43:29.445 [Pipeline] echo 00:43:29.447 Cleanup processes 00:43:29.453 [Pipeline] sh 00:43:29.740 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.740 496545 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.762 [Pipeline] sh 00:43:30.056 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:30.056 ++ awk '{print $1}' 00:43:30.056 ++ grep -v 'sudo pgrep' 00:43:30.056 + sudo kill -9 00:43:30.056 + true 00:43:30.068 [Pipeline] sh 00:43:30.352 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:42.560 [Pipeline] sh 00:43:42.847 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:42.847 Artifacts sizes are good 00:43:42.862 [Pipeline] archiveArtifacts 00:43:42.868 Archiving artifacts 00:43:43.353 [Pipeline] sh 00:43:43.688 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:43.700 [Pipeline] cleanWs 00:43:43.710 [WS-CLEANUP] Deleting project workspace... 00:43:43.710 [WS-CLEANUP] Deferred wipeout is used... 00:43:43.717 [WS-CLEANUP] done 00:43:43.718 [Pipeline] } 00:43:43.731 [Pipeline] // catchError 00:43:43.740 [Pipeline] sh 00:43:44.022 + logger -p user.info -t JENKINS-CI 00:43:44.030 [Pipeline] } 00:43:44.043 [Pipeline] // stage 00:43:44.047 [Pipeline] } 00:43:44.060 [Pipeline] // node 00:43:44.065 [Pipeline] End of Pipeline 00:43:44.104 Finished: SUCCESS